Test Report: Docker_Linux_crio_arm64 19740

f4f6e0076e771cedcca340e072cd1813dc91a89c:2024-10-02:36461

Failed tests (2/327)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 34    | TestAddons/parallel/Ingress       | 151.25       |
| 36    | TestAddons/parallel/MetricsServer | 338.92       |
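
The Ingress failure below is a timeout: curl exits with status 28 when a transfer times out, and minikube surfaces that as "ssh: Process exited with status 28" in the stderr block. A minimal sketch for reproducing the check by hand, assuming the addons-902832 profile from this report is still running (the --max-time cap is an addition for a faster signal, not part of the harness command):

    # Re-run the exact check that failed; -v shows where the request stalls,
    # and curl exit status 28 means "operation timed out".
    out/minikube-linux-arm64 -p addons-902832 ssh \
      "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"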
TestAddons/parallel/Ingress (151.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-902832 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-902832 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-902832 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7844b518-e81f-4ffd-a040-f38acb1ff64d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7844b518-e81f-4ffd-a040-f38acb1ff64d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003783373s
I1002 00:01:55.500607 1468453 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-902832 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.748939934s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-902832 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
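Before the harness's own post-mortem below, a hypothetical manual triage (these commands are assumptions, not part of the recorded run) would start with the controller pods the test waited on at addons_test.go:207:

    # Is the ingress-nginx controller still Ready, and what did it log last?
    kubectl --context addons-902832 -n ingress-nginx get pods \
      -l app.kubernetes.io/component=controller -o wide
    kubectl --context addons-902832 -n ingress-nginx logs \
      -l app.kubernetes.io/component=controller --tail=50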
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-902832
helpers_test.go:235: (dbg) docker inspect addons-902832:

-- stdout --
	[
	    {
	        "Id": "7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664",
	        "Created": "2024-10-01T23:48:33.177211615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1469706,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-01T23:48:33.306193379Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664/hostname",
	        "HostsPath": "/var/lib/docker/containers/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664/hosts",
	        "LogPath": "/var/lib/docker/containers/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664-json.log",
	        "Name": "/addons-902832",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-902832:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-902832",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6ae505a6eedd604f944a4460652cbaec9dd0c83d912166e9fe359a09a3211aeb-init/diff:/var/lib/docker/overlay2/a3930beaaef2dcba1a61f406e1fdc853ce637c87ef61fa93a286e9e50993b951/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ae505a6eedd604f944a4460652cbaec9dd0c83d912166e9fe359a09a3211aeb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ae505a6eedd604f944a4460652cbaec9dd0c83d912166e9fe359a09a3211aeb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ae505a6eedd604f944a4460652cbaec9dd0c83d912166e9fe359a09a3211aeb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-902832",
	                "Source": "/var/lib/docker/volumes/addons-902832/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-902832",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-902832",
	                "name.minikube.sigs.k8s.io": "addons-902832",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8042e82f667c42dcf5dc036da3e36737da63298a2ba0bbda92fdd57e5051eb88",
	            "SandboxKey": "/var/run/docker/netns/8042e82f667c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34294"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34295"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34298"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34296"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34297"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-902832": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "51635cd5ac36d0dc534d71775aefdac2f936c0b4261dead30f5dc6b0bafee43e",
	                    "EndpointID": "74eba1b9edd80348539063b917b90f54ddf72306bc662f0e484a3002e5b81402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-902832",
	                        "7624d238c4e1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
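The dump above is the unfiltered docker inspect output; individual fields can be spot-checked with --format Go templates, the same pattern the harness itself uses later in this log. A sketch against the container name from this report:

    # Container state ("running" in the dump above):
    docker inspect addons-902832 --format '{{.State.Status}}'
    # Static IP on the addons-902832 network (192.168.49.2 above):
    docker inspect addons-902832 --format '{{(index .NetworkSettings.Networks "addons-902832").IPAddress}}'
    # Host port mapped to the node's SSH port 22 (34294 above):
    docker inspect addons-902832 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'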
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-902832 -n addons-902832
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-902832 logs -n 25: (1.581297089s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-732922                                                                     | download-only-732922   | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:48 UTC |
	| delete  | -p download-only-481946                                                                     | download-only-481946   | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:48 UTC |
	| start   | --download-only -p                                                                          | download-docker-549806 | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC |                     |
	|         | download-docker-549806                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-549806                                                                   | download-docker-549806 | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-904477   | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC |                     |
	|         | binary-mirror-904477                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33775                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-904477                                                                     | binary-mirror-904477   | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:48 UTC |
	| addons  | enable dashboard -p                                                                         | addons-902832          | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC |                     |
	|         | addons-902832                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-902832          | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC |                     |
	|         | addons-902832                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-902832 --wait=true                                                                | addons-902832          | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 01 Oct 24 23:51 UTC | 01 Oct 24 23:51 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 01 Oct 24 23:59 UTC | 02 Oct 24 00:00 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | -p addons-902832                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-902832 ip                                                                            | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | -p addons-902832                                                                            |                        |         |         |                     |                     |
	| addons  | addons-902832 addons                                                                        | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-902832 ssh cat                                                                       | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | /opt/local-path-provisioner/pvc-cf99ba77-1628-40e8-9e38-1970b272e06c_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-902832 addons                                                                        | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:01 UTC | 02 Oct 24 00:01 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-902832 addons                                                                        | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:01 UTC | 02 Oct 24 00:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-902832 addons                                                                        | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:01 UTC | 02 Oct 24 00:01 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-902832 ssh curl -s                                                                   | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-902832 ip                                                                            | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:04 UTC | 02 Oct 24 00:04 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:48:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:48:09.068891 1469207 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:48:09.069027 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:48:09.069038 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:48:09.069043 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:48:09.069264 1469207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	I1001 23:48:09.069685 1469207 out.go:352] Setting JSON to false
	I1001 23:48:09.070706 1469207 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":19829,"bootTime":1727806660,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1001 23:48:09.070797 1469207 start.go:139] virtualization:  
	I1001 23:48:09.073822 1469207 out.go:177] * [addons-902832] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 23:48:09.075733 1469207 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:48:09.075775 1469207 notify.go:220] Checking for updates...
	I1001 23:48:09.078742 1469207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:48:09.079918 1469207 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	I1001 23:48:09.081101 1469207 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	I1001 23:48:09.082361 1469207 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 23:48:09.083848 1469207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:48:09.085283 1469207 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:48:09.105988 1469207 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:48:09.106128 1469207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:48:09.157252 1469207 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-01 23:48:09.147046206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:48:09.157429 1469207 docker.go:318] overlay module found
	I1001 23:48:09.160140 1469207 out.go:177] * Using the docker driver based on user configuration
	I1001 23:48:09.161776 1469207 start.go:297] selected driver: docker
	I1001 23:48:09.161802 1469207 start.go:901] validating driver "docker" against <nil>
	I1001 23:48:09.161828 1469207 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:48:09.162495 1469207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:48:09.211620 1469207 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-01 23:48:09.202421967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:48:09.211832 1469207 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:48:09.212072 1469207 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:48:09.214105 1469207 out.go:177] * Using Docker driver with root privileges
	I1001 23:48:09.215756 1469207 cni.go:84] Creating CNI manager for ""
	I1001 23:48:09.215823 1469207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 23:48:09.215835 1469207 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 23:48:09.215906 1469207 start.go:340] cluster config:
	{Name:addons-902832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-902832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:48:09.217609 1469207 out.go:177] * Starting "addons-902832" primary control-plane node in "addons-902832" cluster
	I1001 23:48:09.218971 1469207 cache.go:121] Beginning downloading kic base image for docker with crio
	I1001 23:48:09.220675 1469207 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1001 23:48:09.222570 1469207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:48:09.222625 1469207 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1001 23:48:09.222638 1469207 cache.go:56] Caching tarball of preloaded images
	I1001 23:48:09.222668 1469207 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 23:48:09.222721 1469207 preload.go:172] Found /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1001 23:48:09.222731 1469207 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:48:09.223083 1469207 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/config.json ...
	I1001 23:48:09.223142 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/config.json: {Name:mkf0c7c65aa397d04b9c786920da3f0162eb288c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:09.237560 1469207 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 23:48:09.237696 1469207 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 23:48:09.237738 1469207 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1001 23:48:09.237744 1469207 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1001 23:48:09.237752 1469207 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1001 23:48:09.237757 1469207 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1001 23:48:26.096603 1469207 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1001 23:48:26.096644 1469207 cache.go:194] Successfully downloaded all kic artifacts
	I1001 23:48:26.096685 1469207 start.go:360] acquireMachinesLock for addons-902832: {Name:mk9b70b1d6aef24ed741e07d772b84dae38e28fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:48:26.097184 1469207 start.go:364] duration metric: took 473.162µs to acquireMachinesLock for "addons-902832"
	I1001 23:48:26.097221 1469207 start.go:93] Provisioning new machine with config: &{Name:addons-902832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-902832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:48:26.097328 1469207 start.go:125] createHost starting for "" (driver="docker")
	I1001 23:48:26.098943 1469207 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1001 23:48:26.099205 1469207 start.go:159] libmachine.API.Create for "addons-902832" (driver="docker")
	I1001 23:48:26.099245 1469207 client.go:168] LocalClient.Create starting
	I1001 23:48:26.099355 1469207 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem
	I1001 23:48:26.491466 1469207 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/cert.pem
	I1001 23:48:27.046012 1469207 cli_runner.go:164] Run: docker network inspect addons-902832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1001 23:48:27.059629 1469207 cli_runner.go:211] docker network inspect addons-902832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1001 23:48:27.059737 1469207 network_create.go:284] running [docker network inspect addons-902832] to gather additional debugging logs...
	I1001 23:48:27.059763 1469207 cli_runner.go:164] Run: docker network inspect addons-902832
	W1001 23:48:27.075017 1469207 cli_runner.go:211] docker network inspect addons-902832 returned with exit code 1
	I1001 23:48:27.075050 1469207 network_create.go:287] error running [docker network inspect addons-902832]: docker network inspect addons-902832: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-902832 not found
	I1001 23:48:27.075066 1469207 network_create.go:289] output of [docker network inspect addons-902832]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-902832 not found
	
	** /stderr **
	I1001 23:48:27.075169 1469207 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 23:48:27.099547 1469207 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017f34e0}
	I1001 23:48:27.099592 1469207 network_create.go:124] attempt to create docker network addons-902832 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1001 23:48:27.099651 1469207 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-902832 addons-902832
	I1001 23:48:27.168229 1469207 network_create.go:108] docker network addons-902832 192.168.49.0/24 created
	I1001 23:48:27.168261 1469207 kic.go:121] calculated static IP "192.168.49.2" for the "addons-902832" container
	I1001 23:48:27.168340 1469207 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1001 23:48:27.183509 1469207 cli_runner.go:164] Run: docker volume create addons-902832 --label name.minikube.sigs.k8s.io=addons-902832 --label created_by.minikube.sigs.k8s.io=true
	I1001 23:48:27.199863 1469207 oci.go:103] Successfully created a docker volume addons-902832
	I1001 23:48:27.199960 1469207 cli_runner.go:164] Run: docker run --rm --name addons-902832-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-902832 --entrypoint /usr/bin/test -v addons-902832:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1001 23:48:29.066502 1469207 cli_runner.go:217] Completed: docker run --rm --name addons-902832-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-902832 --entrypoint /usr/bin/test -v addons-902832:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (1.866477857s)
	I1001 23:48:29.066532 1469207 oci.go:107] Successfully prepared a docker volume addons-902832
	I1001 23:48:29.066558 1469207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:48:29.066579 1469207 kic.go:194] Starting extracting preloaded images to volume ...
	I1001 23:48:29.066647 1469207 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-902832:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1001 23:48:33.108442 1469207 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-902832:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.04175358s)
	I1001 23:48:33.108497 1469207 kic.go:203] duration metric: took 4.041915307s to extract preloaded images to volume ...
	W1001 23:48:33.108647 1469207 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1001 23:48:33.108780 1469207 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1001 23:48:33.163355 1469207 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-902832 --name addons-902832 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-902832 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-902832 --network addons-902832 --ip 192.168.49.2 --volume addons-902832:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1001 23:48:33.453713 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Running}}
	I1001 23:48:33.476999 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:48:33.500958 1469207 cli_runner.go:164] Run: docker exec addons-902832 stat /var/lib/dpkg/alternatives/iptables
	I1001 23:48:33.558736 1469207 oci.go:144] the created container "addons-902832" has a running status.
	I1001 23:48:33.558762 1469207 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa...
	I1001 23:48:33.746150 1469207 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1001 23:48:33.769726 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:48:33.798242 1469207 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1001 23:48:33.798260 1469207 kic_runner.go:114] Args: [docker exec --privileged addons-902832 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1001 23:48:33.861110 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:48:33.887303 1469207 machine.go:93] provisionDockerMachine start ...
	I1001 23:48:33.887396 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:33.914677 1469207 main.go:141] libmachine: Using SSH client type: native
	I1001 23:48:33.914942 1469207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1001 23:48:33.914952 1469207 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 23:48:33.916202 1469207 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1001 23:48:37.054819 1469207 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-902832
	
	I1001 23:48:37.054907 1469207 ubuntu.go:169] provisioning hostname "addons-902832"
	I1001 23:48:37.055020 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:37.072218 1469207 main.go:141] libmachine: Using SSH client type: native
	I1001 23:48:37.072467 1469207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1001 23:48:37.072486 1469207 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-902832 && echo "addons-902832" | sudo tee /etc/hostname
	I1001 23:48:37.219276 1469207 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-902832
	
	I1001 23:48:37.219366 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:37.237234 1469207 main.go:141] libmachine: Using SSH client type: native
	I1001 23:48:37.237473 1469207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1001 23:48:37.237500 1469207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-902832' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-902832/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-902832' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:48:37.370975 1469207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:48:37.371002 1469207 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19740-1463060/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-1463060/.minikube}
	I1001 23:48:37.371030 1469207 ubuntu.go:177] setting up certificates
	I1001 23:48:37.371041 1469207 provision.go:84] configureAuth start
	I1001 23:48:37.371100 1469207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-902832
	I1001 23:48:37.387340 1469207 provision.go:143] copyHostCerts
	I1001 23:48:37.387416 1469207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-1463060/.minikube/cert.pem (1123 bytes)
	I1001 23:48:37.387552 1469207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-1463060/.minikube/key.pem (1679 bytes)
	I1001 23:48:37.387624 1469207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.pem (1082 bytes)
	I1001 23:48:37.387673 1469207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca-key.pem org=jenkins.addons-902832 san=[127.0.0.1 192.168.49.2 addons-902832 localhost minikube]
	I1001 23:48:37.734071 1469207 provision.go:177] copyRemoteCerts
	I1001 23:48:37.734170 1469207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:48:37.734216 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:37.750698 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:48:37.847747 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 23:48:37.871254 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 23:48:37.895006 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:48:37.917817 1469207 provision.go:87] duration metric: took 546.753417ms to configureAuth
	I1001 23:48:37.917842 1469207 ubuntu.go:193] setting minikube options for container-runtime
	I1001 23:48:37.918029 1469207 config.go:182] Loaded profile config "addons-902832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:48:37.918141 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:37.936641 1469207 main.go:141] libmachine: Using SSH client type: native
	I1001 23:48:37.936888 1469207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1001 23:48:37.936907 1469207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:48:38.174604 1469207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:48:38.174624 1469207 machine.go:96] duration metric: took 4.28730214s to provisionDockerMachine
	I1001 23:48:38.174634 1469207 client.go:171] duration metric: took 12.075381145s to LocalClient.Create
	I1001 23:48:38.174648 1469207 start.go:167] duration metric: took 12.075444003s to libmachine.API.Create "addons-902832"
	I1001 23:48:38.174655 1469207 start.go:293] postStartSetup for "addons-902832" (driver="docker")
	I1001 23:48:38.174665 1469207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:48:38.174725 1469207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:48:38.174770 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:38.192010 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:48:38.288955 1469207 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:48:38.291799 1469207 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1001 23:48:38.291835 1469207 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1001 23:48:38.291847 1469207 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1001 23:48:38.291854 1469207 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1001 23:48:38.291868 1469207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-1463060/.minikube/addons for local assets ...
	I1001 23:48:38.291940 1469207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-1463060/.minikube/files for local assets ...
	I1001 23:48:38.291972 1469207 start.go:296] duration metric: took 117.311194ms for postStartSetup
	I1001 23:48:38.292286 1469207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-902832
	I1001 23:48:38.308759 1469207 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/config.json ...
	I1001 23:48:38.309036 1469207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 23:48:38.309099 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:38.324673 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:48:38.415951 1469207 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1001 23:48:38.420215 1469207 start.go:128] duration metric: took 12.322869972s to createHost
	I1001 23:48:38.420288 1469207 start.go:83] releasing machines lock for "addons-902832", held for 12.323086148s
	I1001 23:48:38.420391 1469207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-902832
	I1001 23:48:38.436243 1469207 ssh_runner.go:195] Run: cat /version.json
	I1001 23:48:38.436293 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:38.436309 1469207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:48:38.436383 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:38.454482 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:48:38.461082 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:48:38.681283 1469207 ssh_runner.go:195] Run: systemctl --version
	I1001 23:48:38.685621 1469207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:48:38.829234 1469207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 23:48:38.833407 1469207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:48:38.854114 1469207 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1001 23:48:38.854233 1469207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:48:38.883936 1469207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
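	The two find commands above are minikube's CNI hand-off: any preinstalled loopback, bridge, or podman CNI definitions are renamed with a .mk_disabled suffix so that only the CNI minikube applies itself (kindnet, per the cni.go lines further down) is picked up by CRI-O. A minimal sketch of the same rename pattern, using the directory and suffix from the log:

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled' \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;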
	I1001 23:48:38.883962 1469207 start.go:495] detecting cgroup driver to use...
	I1001 23:48:38.883995 1469207 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1001 23:48:38.884047 1469207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:48:38.900057 1469207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:48:38.911936 1469207 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:48:38.912005 1469207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:48:38.925324 1469207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:48:38.939805 1469207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:48:39.028416 1469207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:48:39.126944 1469207 docker.go:233] disabling docker service ...
	I1001 23:48:39.127055 1469207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:48:39.149248 1469207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:48:39.161253 1469207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:48:39.263791 1469207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:48:39.363479 1469207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:48:39.374836 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:48:39.391099 1469207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:48:39.391242 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.401063 1469207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:48:39.401179 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.411053 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.421638 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.431707 1469207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:48:39.441198 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.451541 1469207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.467518 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.477676 1469207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:48:39.487379 1469207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:48:39.495644 1469207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:48:39.578225 1469207 ssh_runner.go:195] Run: sudo systemctl restart crio
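	Between them, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart. A rough reconstruction of the resulting drop-in, assuming the stock kicbase file and showing only the keys the log touches (section placement is an assumption):

	    [crio.image]
	    # pause image pinned by minikube for this Kubernetes version
	    pause_image = "registry.k8s.io/pause:3.10"

	    [crio.runtime]
	    # cgroupfs detected on the host, so CRI-O is matched to it
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]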
	I1001 23:48:39.690484 1469207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:48:39.690627 1469207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:48:39.694185 1469207 start.go:563] Will wait 60s for crictl version
	I1001 23:48:39.694246 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:48:39.697533 1469207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:48:39.734732 1469207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1001 23:48:39.734848 1469207 ssh_runner.go:195] Run: crio --version
	I1001 23:48:39.771965 1469207 ssh_runner.go:195] Run: crio --version
	I1001 23:48:39.809513 1469207 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1001 23:48:39.810643 1469207 cli_runner.go:164] Run: docker network inspect addons-902832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 23:48:39.824289 1469207 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1001 23:48:39.827781 1469207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
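	The one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal line, append the fresh entry, stage the result in a temp file, then copy it back into place, so repeated starts never duplicate the entry. The same pattern spelled out, with the IP and name from this run (note the literal tab in the entry):

	    entry=$'192.168.49.1\thost.minikube.internal'
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts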
	I1001 23:48:39.838374 1469207 kubeadm.go:883] updating cluster {Name:addons-902832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-902832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:48:39.838496 1469207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:48:39.838558 1469207 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:48:39.908781 1469207 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:48:39.908803 1469207 crio.go:433] Images already preloaded, skipping extraction
	I1001 23:48:39.908859 1469207 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:48:39.947028 1469207 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:48:39.947051 1469207 cache_images.go:84] Images are preloaded, skipping loading
	I1001 23:48:39.947060 1469207 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1001 23:48:39.947162 1469207 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-902832 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-902832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:48:39.947258 1469207 ssh_runner.go:195] Run: crio config
	I1001 23:48:39.992161 1469207 cni.go:84] Creating CNI manager for ""
	I1001 23:48:39.992185 1469207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 23:48:39.992195 1469207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:48:39.992217 1469207 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-902832 NodeName:addons-902832 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:48:39.992364 1469207 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-902832"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:48:39.992436 1469207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:48:40.001226 1469207 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:48:40.001317 1469207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 23:48:40.018850 1469207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1001 23:48:40.039861 1469207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:48:40.059729 1469207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1001 23:48:40.078342 1469207 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1001 23:48:40.082159 1469207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:48:40.094632 1469207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:48:40.179150 1469207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:48:40.193853 1469207 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832 for IP: 192.168.49.2
	I1001 23:48:40.193871 1469207 certs.go:194] generating shared ca certs ...
	I1001 23:48:40.193888 1469207 certs.go:226] acquiring lock for ca certs: {Name:mk3f5ff76a5b6681ba8f6985f72e49b1d01e9c88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:40.194027 1469207 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.key
	I1001 23:48:40.428355 1469207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.crt ...
	I1001 23:48:40.428385 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.crt: {Name:mk482523cb013c30b3ab046472a810fd35f37123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:40.429026 1469207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.key ...
	I1001 23:48:40.429043 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.key: {Name:mke64b9ef4d3d7b41b267e67131531c63f1dfe18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:40.429166 1469207 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.key
	I1001 23:48:40.782587 1469207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.crt ...
	I1001 23:48:40.782617 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.crt: {Name:mk51fbc20708189d025a430fb8ae145cb131ba4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:40.782801 1469207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.key ...
	I1001 23:48:40.782813 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.key: {Name:mk427c0c8b72ae0caed528d2040b8d9247afdbaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:40.782891 1469207 certs.go:256] generating profile certs ...
	I1001 23:48:40.782952 1469207 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.key
	I1001 23:48:40.782968 1469207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt with IP's: []
	I1001 23:48:41.212760 1469207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt ...
	I1001 23:48:41.212796 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: {Name:mk9eb884be1ea85b9b4c9866fd707cae21a89748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:41.212995 1469207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.key ...
	I1001 23:48:41.213008 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.key: {Name:mk49ef10649d22c01fc6b3445c976763c2dd36cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:41.213555 1469207 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key.303a9890
	I1001 23:48:41.213581 1469207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt.303a9890 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1001 23:48:42.218783 1469207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt.303a9890 ...
	I1001 23:48:42.218817 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt.303a9890: {Name:mke5876db42c8dd84bc0fcc3061d4d6eeee90942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:42.219013 1469207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key.303a9890 ...
	I1001 23:48:42.219027 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key.303a9890: {Name:mk27e5000269ab7faf0202e7ed91abf6b232c400 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:42.219125 1469207 certs.go:381] copying /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt.303a9890 -> /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt
	I1001 23:48:42.219227 1469207 certs.go:385] copying /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key.303a9890 -> /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key
	I1001 23:48:42.219286 1469207 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.key
	I1001 23:48:42.219309 1469207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.crt with IP's: []
	I1001 23:48:42.569725 1469207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.crt ...
	I1001 23:48:42.569757 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.crt: {Name:mk417b9830923bd6e8c521aad7faed88ddb7228d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:42.569948 1469207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.key ...
	I1001 23:48:42.569962 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.key: {Name:mk1dc85790736919b082a6218cc7cc5613fad41e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:42.570152 1469207 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:48:42.570194 1469207 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem (1082 bytes)
	I1001 23:48:42.570218 1469207 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:48:42.570250 1469207 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/key.pem (1679 bytes)
	I1001 23:48:42.570838 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:48:42.596372 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1001 23:48:42.620146 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:48:42.643309 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:48:42.666497 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 23:48:42.690598 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:48:42.713920 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:48:42.737177 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:48:42.760433 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:48:42.783806 1469207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:48:42.801423 1469207 ssh_runner.go:195] Run: openssl version
	I1001 23:48:42.806694 1469207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:48:42.815879 1469207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:48:42.819067 1469207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:48:42.819128 1469207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:48:42.825696 1469207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
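	The b5213941.0 name in the symlink above is the OpenSSL subject hash of minikubeCA.pem, produced by the openssl x509 -hash call two lines earlier; hashed symlinks in /etc/ssl/certs are how OpenSSL-based clients look up a trusted CA. A sketch of how that link name is derived, assuming the cert path from the log:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"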
	I1001 23:48:42.834651 1469207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:48:42.837867 1469207 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:48:42.837935 1469207 kubeadm.go:392] StartCluster: {Name:addons-902832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-902832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:48:42.838028 1469207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:48:42.838095 1469207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:48:42.874460 1469207 cri.go:89] found id: ""
	I1001 23:48:42.874535 1469207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 23:48:42.883341 1469207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 23:48:42.891876 1469207 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1001 23:48:42.892007 1469207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 23:48:42.900749 1469207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 23:48:42.900769 1469207 kubeadm.go:157] found existing configuration files:
	
	I1001 23:48:42.900847 1469207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 23:48:42.909316 1469207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 23:48:42.909409 1469207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 23:48:42.917945 1469207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 23:48:42.926922 1469207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 23:48:42.926991 1469207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 23:48:42.935402 1469207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 23:48:42.943850 1469207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 23:48:42.943914 1469207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 23:48:42.952251 1469207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 23:48:42.961152 1469207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 23:48:42.961246 1469207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
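	The four grep/rm pairs above are a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; here every grep exits with status 2 because the files do not exist yet, so the rm -f calls are no-ops. A compact sketch of that loop, using the paths and endpoint from the log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done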
	I1001 23:48:42.969200 1469207 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1001 23:48:43.013654 1469207 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 23:48:43.013715 1469207 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 23:48:43.033727 1469207 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1001 23:48:43.033806 1469207 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1001 23:48:43.033845 1469207 kubeadm.go:310] OS: Linux
	I1001 23:48:43.033895 1469207 kubeadm.go:310] CGROUPS_CPU: enabled
	I1001 23:48:43.033946 1469207 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1001 23:48:43.033997 1469207 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1001 23:48:43.034051 1469207 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1001 23:48:43.034101 1469207 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1001 23:48:43.034159 1469207 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1001 23:48:43.034211 1469207 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1001 23:48:43.034268 1469207 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1001 23:48:43.034318 1469207 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1001 23:48:43.099436 1469207 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 23:48:43.099566 1469207 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 23:48:43.099678 1469207 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 23:48:43.111574 1469207 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 23:48:43.115690 1469207 out.go:235]   - Generating certificates and keys ...
	I1001 23:48:43.115900 1469207 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 23:48:43.115985 1469207 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 23:48:43.295288 1469207 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 23:48:44.120949 1469207 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 23:48:44.475930 1469207 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 23:48:44.911551 1469207 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 23:48:46.185018 1469207 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 23:48:46.185272 1469207 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-902832 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 23:48:46.544115 1469207 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 23:48:46.544255 1469207 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-902832 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 23:48:46.818696 1469207 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 23:48:47.136978 1469207 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 23:48:47.521407 1469207 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 23:48:47.521591 1469207 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 23:48:47.913537 1469207 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 23:48:48.164625 1469207 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 23:48:48.391968 1469207 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 23:48:48.710116 1469207 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 23:48:49.047213 1469207 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 23:48:49.047956 1469207 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 23:48:49.053436 1469207 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 23:48:49.055086 1469207 out.go:235]   - Booting up control plane ...
	I1001 23:48:49.055217 1469207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 23:48:49.055296 1469207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 23:48:49.056383 1469207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 23:48:49.066757 1469207 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 23:48:49.072732 1469207 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 23:48:49.072794 1469207 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 23:48:49.166277 1469207 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 23:48:49.166418 1469207 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 23:48:50.167664 1469207 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001500285s
	I1001 23:48:50.167760 1469207 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 23:48:55.669893 1469207 kubeadm.go:310] [api-check] The API server is healthy after 5.50221357s
	I1001 23:48:55.689502 1469207 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 23:48:55.702929 1469207 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 23:48:55.724852 1469207 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 23:48:55.725111 1469207 kubeadm.go:310] [mark-control-plane] Marking the node addons-902832 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 23:48:55.735753 1469207 kubeadm.go:310] [bootstrap-token] Using token: np6l28.nq98jby1xj6o1njh
	I1001 23:48:55.737057 1469207 out.go:235]   - Configuring RBAC rules ...
	I1001 23:48:55.737179 1469207 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 23:48:55.744238 1469207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 23:48:55.750926 1469207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 23:48:55.754136 1469207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 23:48:55.757140 1469207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 23:48:55.762345 1469207 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 23:48:56.078202 1469207 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 23:48:56.504886 1469207 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 23:48:57.077291 1469207 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 23:48:57.078423 1469207 kubeadm.go:310] 
	I1001 23:48:57.078494 1469207 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 23:48:57.078500 1469207 kubeadm.go:310] 
	I1001 23:48:57.078576 1469207 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 23:48:57.078581 1469207 kubeadm.go:310] 
	I1001 23:48:57.078608 1469207 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 23:48:57.078667 1469207 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 23:48:57.078716 1469207 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 23:48:57.078720 1469207 kubeadm.go:310] 
	I1001 23:48:57.078773 1469207 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 23:48:57.078778 1469207 kubeadm.go:310] 
	I1001 23:48:57.078824 1469207 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 23:48:57.078828 1469207 kubeadm.go:310] 
	I1001 23:48:57.078880 1469207 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 23:48:57.078953 1469207 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 23:48:57.079020 1469207 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 23:48:57.079028 1469207 kubeadm.go:310] 
	I1001 23:48:57.079111 1469207 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 23:48:57.079205 1469207 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 23:48:57.079211 1469207 kubeadm.go:310] 
	I1001 23:48:57.079293 1469207 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token np6l28.nq98jby1xj6o1njh \
	I1001 23:48:57.079394 1469207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5208ef0b8fca8d57e76f0c6fa712e05fed0b080e4466dd6159bacdcc4fe52560 \
	I1001 23:48:57.079415 1469207 kubeadm.go:310] 	--control-plane 
	I1001 23:48:57.079419 1469207 kubeadm.go:310] 
	I1001 23:48:57.079502 1469207 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 23:48:57.079507 1469207 kubeadm.go:310] 
	I1001 23:48:57.079587 1469207 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token np6l28.nq98jby1xj6o1njh \
	I1001 23:48:57.079687 1469207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5208ef0b8fca8d57e76f0c6fa712e05fed0b080e4466dd6159bacdcc4fe52560 
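	The --discovery-token-ca-cert-hash in both join commands is standard kubeadm behavior: the SHA-256 digest of the cluster CA's DER-encoded public key. It can be recomputed from the CA cert (the path below is where this run copied ca.crt):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'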
	I1001 23:48:57.082243 1469207 kubeadm.go:310] W1001 23:48:43.010228    1184 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:48:57.082537 1469207 kubeadm.go:310] W1001 23:48:43.011176    1184 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:48:57.082755 1469207 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1001 23:48:57.082866 1469207 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 23:48:57.082884 1469207 cni.go:84] Creating CNI manager for ""
	I1001 23:48:57.082895 1469207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 23:48:57.084703 1469207 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 23:48:57.085971 1469207 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 23:48:57.090354 1469207 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 23:48:57.090373 1469207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 23:48:57.110517 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 23:48:57.388161 1469207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 23:48:57.388287 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:48:57.388362 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-902832 minikube.k8s.io/updated_at=2024_10_01T23_48_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=addons-902832 minikube.k8s.io/primary=true
	I1001 23:48:57.533804 1469207 ops.go:34] apiserver oom_adj: -16
	I1001 23:48:57.533969 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:48:58.034673 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:48:58.534540 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:48:59.034842 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:48:59.534615 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:49:00.034213 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:49:00.534665 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:49:01.034662 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:49:01.534019 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:49:01.636955 1469207 kubeadm.go:1113] duration metric: took 4.248713634s to wait for elevateKubeSystemPrivileges
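	The nine kubectl get sa default runs above (23:48:57 through 23:49:01) are a fixed-interval poll: minikube retries roughly every 500ms until the default ServiceAccount exists, the usual signal that service-account bootstrapping has finished and the minikube-rbac binding created earlier is usable. An equivalent wait loop, using the binary and kubeconfig paths from the log:

	    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done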
	I1001 23:49:01.636988 1469207 kubeadm.go:394] duration metric: took 18.799058587s to StartCluster
	I1001 23:49:01.637007 1469207 settings.go:142] acquiring lock: {Name:mk9069fc4941965284bfe98880a9f5d91bac598f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:49:01.637144 1469207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-1463060/kubeconfig
	I1001 23:49:01.637619 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/kubeconfig: {Name:mk74b9b3ba7b209d36f296358939f489e2673d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:49:01.638265 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 23:49:01.638291 1469207 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:49:01.638556 1469207 config.go:182] Loaded profile config "addons-902832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:49:01.638598 1469207 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
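
Every `true` entry in the toEnable map above is an addon this run will install; the same toggles are available interactively through the standard minikube CLI (shown for reference, not part of this run):

	minikube -p addons-902832 addons list
	minikube -p addons-902832 addons enable metrics-server
	minikube -p addons-902832 addons disable volcano
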
	I1001 23:49:01.638684 1469207 addons.go:69] Setting yakd=true in profile "addons-902832"
	I1001 23:49:01.638699 1469207 addons.go:234] Setting addon yakd=true in "addons-902832"
	I1001 23:49:01.638726 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.639238 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.639579 1469207 addons.go:69] Setting metrics-server=true in profile "addons-902832"
	I1001 23:49:01.639600 1469207 addons.go:234] Setting addon metrics-server=true in "addons-902832"
	I1001 23:49:01.639634 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.640079 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.641636 1469207 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-902832"
	I1001 23:49:01.643464 1469207 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-902832"
	I1001 23:49:01.643572 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.641821 1469207 addons.go:69] Setting registry=true in profile "addons-902832"
	I1001 23:49:01.641839 1469207 addons.go:69] Setting storage-provisioner=true in profile "addons-902832"
	I1001 23:49:01.641847 1469207 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-902832"
	I1001 23:49:01.641855 1469207 addons.go:69] Setting volcano=true in profile "addons-902832"
	I1001 23:49:01.641861 1469207 addons.go:69] Setting volumesnapshots=true in profile "addons-902832"
	I1001 23:49:01.641910 1469207 out.go:177] * Verifying Kubernetes components...
	I1001 23:49:01.642151 1469207 addons.go:69] Setting ingress=true in profile "addons-902832"
	I1001 23:49:01.642158 1469207 addons.go:69] Setting cloud-spanner=true in profile "addons-902832"
	I1001 23:49:01.642164 1469207 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-902832"
	I1001 23:49:01.642169 1469207 addons.go:69] Setting default-storageclass=true in profile "addons-902832"
	I1001 23:49:01.642173 1469207 addons.go:69] Setting gcp-auth=true in profile "addons-902832"
	I1001 23:49:01.642182 1469207 addons.go:69] Setting inspektor-gadget=true in profile "addons-902832"
	I1001 23:49:01.642187 1469207 addons.go:69] Setting ingress-dns=true in profile "addons-902832"
	I1001 23:49:01.643855 1469207 addons.go:234] Setting addon ingress-dns=true in "addons-902832"
	I1001 23:49:01.643912 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.644439 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.647789 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.659716 1469207 addons.go:234] Setting addon ingress=true in "addons-902832"
	I1001 23:49:01.659785 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.660268 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.663972 1469207 addons.go:234] Setting addon registry=true in "addons-902832"
	I1001 23:49:01.664089 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.664605 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.670085 1469207 addons.go:234] Setting addon cloud-spanner=true in "addons-902832"
	I1001 23:49:01.670143 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.670146 1469207 addons.go:234] Setting addon storage-provisioner=true in "addons-902832"
	I1001 23:49:01.670185 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.670634 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.670647 1469207 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-902832"
	I1001 23:49:01.670881 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.694655 1469207 addons.go:234] Setting addon volcano=true in "addons-902832"
	I1001 23:49:01.694719 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.695228 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.670635 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.711323 1469207 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-902832"
	I1001 23:49:01.711423 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.711964 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.717344 1469207 addons.go:234] Setting addon volumesnapshots=true in "addons-902832"
	I1001 23:49:01.717401 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.718007 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.737762 1469207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-902832"
	I1001 23:49:01.737833 1469207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:49:01.738127 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.753155 1469207 mustload.go:65] Loading cluster: addons-902832
	I1001 23:49:01.753390 1469207 config.go:182] Loaded profile config "addons-902832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:49:01.753690 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.779087 1469207 addons.go:234] Setting addon inspektor-gadget=true in "addons-902832"
	I1001 23:49:01.779140 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.779879 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.811020 1469207 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1001 23:49:01.822646 1469207 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1001 23:49:01.829645 1469207 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1001 23:49:01.829812 1469207 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 23:49:01.847937 1469207 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 23:49:01.848205 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:01.829846 1469207 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1001 23:49:01.829971 1469207 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 23:49:01.830086 1469207 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1001 23:49:01.850062 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 23:49:01.850132 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:01.830091 1469207 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1001 23:49:01.854133 1469207 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 23:49:01.854187 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 23:49:01.854284 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:01.862580 1469207 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 23:49:01.862603 1469207 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 23:49:01.862669 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:01.870271 1469207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 23:49:01.870559 1469207 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 23:49:01.870581 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1001 23:49:01.870649 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	W1001 23:49:01.883475 1469207 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
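
volcano is the one addon rejected in this run: it does not support the cri-o runtime, so minikube logs the warning and carries on with the remaining addons. The active runtime is visible in the node status (reference command):

	kubectl --context addons-902832 get nodes -o wide
	# the CONTAINER-RUNTIME column reads cri-o://<version> for this cluster
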
	I1001 23:49:01.925336 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 23:49:01.926158 1469207 out.go:177]   - Using image docker.io/registry:2.8.3
	I1001 23:49:01.926715 1469207 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-902832"
	I1001 23:49:01.927381 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.927927 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.966506 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 23:49:01.967450 1469207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 23:49:01.969160 1469207 addons.go:234] Setting addon default-storageclass=true in "addons-902832"
	I1001 23:49:01.969253 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.975681 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 23:49:01.977149 1469207 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 23:49:01.975856 1469207 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 23:49:01.975969 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.976093 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.977341 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.012100 1469207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1001 23:49:02.019584 1469207 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 23:49:02.019615 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 23:49:02.019693 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:01.977349 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 23:49:02.020920 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.035536 1469207 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1001 23:49:02.036014 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 23:49:02.036170 1469207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:49:02.036360 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 23:49:02.043505 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1001 23:49:02.043536 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1001 23:49:02.043619 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.043891 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.063925 1469207 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:49:02.064029 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 23:49:02.064531 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.091558 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 23:49:02.101519 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 23:49:02.104105 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.104809 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.126886 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.127989 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.136066 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 23:49:02.139280 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 23:49:02.143543 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 23:49:02.145962 1469207 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 23:49:02.148999 1469207 out.go:177]   - Using image docker.io/busybox:stable
	I1001 23:49:02.149117 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 23:49:02.155617 1469207 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 23:49:02.155642 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 23:49:02.155708 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.175063 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 23:49:02.175128 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 23:49:02.175275 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.179270 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.181671 1469207 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 23:49:02.181686 1469207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 23:49:02.181751 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.202524 1469207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:49:02.224584 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.225278 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.226876 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.273486 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.273855 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.276920 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.286586 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.416888 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 23:49:02.548515 1469207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 23:49:02.548578 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 23:49:02.610722 1469207 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 23:49:02.610794 1469207 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 23:49:02.613587 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 23:49:02.641055 1469207 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 23:49:02.641129 1469207 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 23:49:02.653277 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 23:49:02.690331 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:49:02.691587 1469207 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 23:49:02.691648 1469207 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 23:49:02.706075 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 23:49:02.714160 1469207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 23:49:02.714235 1469207 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 23:49:02.723319 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 23:49:02.733696 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 23:49:02.772383 1469207 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 23:49:02.772454 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 23:49:02.782365 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1001 23:49:02.782442 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1001 23:49:02.787205 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 23:49:02.787276 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 23:49:02.805774 1469207 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 23:49:02.805847 1469207 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 23:49:02.830174 1469207 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 23:49:02.830249 1469207 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 23:49:02.881777 1469207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 23:49:02.881848 1469207 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 23:49:02.907796 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1001 23:49:02.907877 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1001 23:49:02.913171 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 23:49:02.956583 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 23:49:02.956668 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 23:49:03.006805 1469207 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 23:49:03.006896 1469207 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 23:49:03.014541 1469207 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 23:49:03.014569 1469207 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 23:49:03.069505 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 23:49:03.071805 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1001 23:49:03.071876 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1001 23:49:03.120594 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 23:49:03.120621 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 23:49:03.155118 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 23:49:03.155144 1469207 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 23:49:03.207882 1469207 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 23:49:03.207911 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 23:49:03.209277 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1001 23:49:03.209299 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1001 23:49:03.227326 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 23:49:03.227352 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 23:49:03.285597 1469207 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 23:49:03.285622 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 23:49:03.315326 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 23:49:03.352797 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 23:49:03.352823 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 23:49:03.352974 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 23:49:03.353048 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1001 23:49:03.353059 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1001 23:49:03.446709 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 23:49:03.446736 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 23:49:03.457385 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1001 23:49:03.457412 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1001 23:49:03.569743 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 23:49:03.569771 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1001 23:49:03.578767 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 23:49:03.578793 1469207 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 23:49:03.666559 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 23:49:03.666629 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1001 23:49:03.711708 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 23:49:03.720600 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 23:49:03.720670 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 23:49:03.799016 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 23:49:03.799083 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 23:49:03.929375 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 23:49:03.929450 1469207 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 23:49:04.089253 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 23:49:04.861879 1469207 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.825832416s)
	I1001 23:49:04.862022 1469207 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
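
The sed pipeline that just completed splices a hosts block into the CoreDNS Corefile so pods can resolve the host gateway by name:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

The rewritten Corefile can be inspected with `kubectl -n kube-system get configmap coredns -o yaml`.
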
	I1001 23:49:04.861978 1469207 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.659432373s)
	I1001 23:49:04.862840 1469207 node_ready.go:35] waiting up to 6m0s for node "addons-902832" to be "Ready" ...
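
node_ready.go's six-minute readiness gate corresponds to this one-shot kubectl equivalent (reference only):

	kubectl --context addons-902832 wait --for=condition=Ready \
	    node/addons-902832 --timeout=6m
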
	I1001 23:49:04.864080 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.447165411s)
	I1001 23:49:05.620732 1469207 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-902832" context rescaled to 1 replicas
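
On a single-node cluster minikube trims coredns down to one replica; done by hand, that rescale would be (reference only):

	kubectl --context addons-902832 -n kube-system scale deployment coredns --replicas=1
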
	I1001 23:49:06.094829 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.481166142s)
	I1001 23:49:06.271623 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.618265498s)
	I1001 23:49:06.716737 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.026325174s)
	I1001 23:49:06.716841 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.010699949s)
	I1001 23:49:06.857937 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.134540013s)
	I1001 23:49:06.878165 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:07.643991 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.910162839s)
	I1001 23:49:07.644024 1469207 addons.go:475] Verifying addon ingress=true in "addons-902832"
	I1001 23:49:07.644222 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.730982993s)
	I1001 23:49:07.644241 1469207 addons.go:475] Verifying addon registry=true in "addons-902832"
	I1001 23:49:07.644527 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.574981329s)
	I1001 23:49:07.644547 1469207 addons.go:475] Verifying addon metrics-server=true in "addons-902832"
	I1001 23:49:07.647792 1469207 out.go:177] * Verifying registry addon...
	I1001 23:49:07.647877 1469207 out.go:177] * Verifying ingress addon...
	I1001 23:49:07.652272 1469207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 23:49:07.653193 1469207 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 23:49:07.694387 1469207 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 23:49:07.694420 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:07.695299 1469207 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 23:49:07.695319 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
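
The kapi.go polls above watch pods by label selector until they report Ready. The equivalent blocking checks with plain kubectl, using the same selectors as the log (reference commands; the timeouts are illustrative):

	kubectl --context addons-902832 -n kube-system wait --for=condition=Ready \
	    pod -l kubernetes.io/minikube-addons=registry --timeout=5m
	kubectl --context addons-902832 -n ingress-nginx wait --for=condition=Ready \
	    pod -l app.kubernetes.io/name=ingress-nginx --timeout=5m
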
	I1001 23:49:07.897759 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.582384486s)
	W1001 23:49:07.897799 1469207 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 23:49:07.897821 1469207 retry.go:31] will retry after 301.384921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
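
This failure is a CRD ordering race: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the VolumeSnapshotClass CRD, and the API server has not registered the new kind yet, hence "ensure CRDs are installed first". minikube's remedy, visible below, is to retry the apply (with --force) roughly 300 ms later, by which time the CRDs are established. A manual two-phase fix would look like this (illustrative commands, not what minikube runs):

	# 1. install the CRDs on their own
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	# 2. block until the API server has established the new kind
	kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# 3. the custom resource can now be applied safely
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
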
	I1001 23:49:07.897868 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.544878141s)
	I1001 23:49:07.898089 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.18630257s)
	I1001 23:49:07.902086 1469207 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-902832 service yakd-dashboard -n yakd-dashboard
	
	I1001 23:49:08.136197 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.046841038s)
	I1001 23:49:08.136232 1469207 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-902832"
	I1001 23:49:08.139010 1469207 out.go:177] * Verifying csi-hostpath-driver addon...
	I1001 23:49:08.142520 1469207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 23:49:08.154031 1469207 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 23:49:08.154053 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:08.158668 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:08.182952 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:08.200360 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 23:49:08.646596 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:08.708735 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:08.746151 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:09.148366 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:09.159990 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:09.168972 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:09.367218 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:09.645982 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:09.656164 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:09.657299 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:10.147692 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:10.160511 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:10.162501 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:10.646713 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:10.657410 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:10.658023 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:11.147035 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:11.160614 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:11.162386 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:11.227609 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.02720651s)
	I1001 23:49:11.647351 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:11.656257 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:11.658952 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:11.866128 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:11.888952 1469207 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 23:49:11.889043 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:11.906966 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:12.015475 1469207 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 23:49:12.035460 1469207 addons.go:234] Setting addon gcp-auth=true in "addons-902832"
	I1001 23:49:12.035520 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:12.035980 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:12.058360 1469207 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 23:49:12.058480 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:12.076020 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:12.146147 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:12.157837 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:12.158973 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:12.185770 1469207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 23:49:12.188464 1469207 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1001 23:49:12.191203 1469207 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 23:49:12.191248 1469207 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 23:49:12.222614 1469207 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 23:49:12.222648 1469207 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 23:49:12.241158 1469207 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 23:49:12.241187 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 23:49:12.277784 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
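
gcp-auth is delivered as a mutating admission webhook (gcp-auth-webhook.yaml above) that injects the credentials copied to /var/lib/minikube/google_application_credentials.json into new pods. Once the addon settles, the registration can be listed (reference only):

	kubectl --context addons-902832 get mutatingwebhookconfigurations
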
	I1001 23:49:12.649079 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:12.657731 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:12.659207 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:12.904176 1469207 addons.go:475] Verifying addon gcp-auth=true in "addons-902832"
	I1001 23:49:12.907607 1469207 out.go:177] * Verifying gcp-auth addon...
	I1001 23:49:12.911050 1469207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 23:49:12.921096 1469207 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 23:49:12.921161 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:13.147127 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:13.155648 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:13.157261 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:13.415586 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:13.646880 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:13.656014 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:13.657309 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:13.866600 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:13.914763 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:14.146973 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:14.156676 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:14.157447 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:14.414625 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:14.646544 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:14.656367 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:14.658598 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:14.914570 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:15.146747 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:15.156746 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:15.157844 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:15.414544 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:15.646270 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:15.655130 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:15.657923 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:15.914895 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:16.146284 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:16.155472 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:16.156900 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:16.366357 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:16.414430 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:16.645941 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:16.656747 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:16.657640 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:16.914785 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:17.146542 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:17.155587 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:17.157834 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:17.414664 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:17.646770 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:17.655530 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:17.658496 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:17.915101 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:18.146614 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:18.156272 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:18.157504 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:18.366482 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:18.414544 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:18.646151 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:18.656855 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:18.657106 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:18.914669 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:19.146308 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:19.155752 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:19.158879 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:19.414366 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:19.646284 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:19.655457 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:19.657524 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:19.914630 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:20.146301 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:20.156324 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:20.157658 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:20.414653 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:20.646207 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:20.655635 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:20.657216 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:20.866712 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:20.914712 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:21.146366 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:21.156672 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:21.158051 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:21.414315 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:21.645915 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:21.655526 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:21.657565 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:21.915049 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:22.146781 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:22.156097 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:22.157425 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:22.414025 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:22.646109 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:22.657787 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:22.658505 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:22.915354 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:23.146293 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:23.156565 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:23.157565 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:23.366783 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:23.414355 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:23.646379 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:23.656106 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:23.656849 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:23.915216 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:24.146521 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:24.156345 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:24.157139 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:24.414498 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:24.646147 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:24.655907 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:24.658962 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:24.914167 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:25.147265 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:25.158447 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:25.159250 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:25.414120 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:25.647051 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:25.656668 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:25.657404 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:25.866822 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:25.914999 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:26.146124 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:26.155705 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:26.156966 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:26.416759 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:26.646237 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:26.655325 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:26.657701 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:26.914881 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:27.146321 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:27.159982 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:27.161097 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:27.414497 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:27.646624 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:27.656561 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:27.657647 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:27.916737 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:28.145750 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:28.155653 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:28.158422 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:28.366730 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:28.414610 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:28.647216 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:28.655306 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:28.657001 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:28.915205 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:29.146324 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:29.155211 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:29.156791 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:29.414684 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:29.646511 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:29.655695 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:29.656900 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:29.914577 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:30.146918 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:30.156710 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:30.157936 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:30.414789 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:30.646180 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:30.655704 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:30.657191 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:30.866459 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:30.914952 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:31.146407 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:31.155123 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:31.157072 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:31.414445 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:31.646621 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:31.656707 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:31.657664 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:31.915112 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:32.146860 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:32.156365 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:32.157168 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:32.415336 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:32.645805 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:32.656349 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:32.657097 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:32.866639 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:32.914992 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:33.146452 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:33.155532 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:33.156744 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:33.414225 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:33.646504 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:33.656135 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:33.656925 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:33.914452 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:34.146138 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:34.155839 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:34.157607 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:34.414652 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:34.646844 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:34.656138 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:34.656939 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:34.914511 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:35.146816 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:35.156847 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:35.158093 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:35.366216 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:35.414381 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:35.647446 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:35.656271 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:35.656984 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:35.915237 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:36.145890 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:36.156547 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:36.157164 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:36.414880 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:36.646938 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:36.656688 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:36.657258 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:36.914592 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:37.146477 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:37.156491 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:37.156991 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:37.366998 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:37.414885 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:37.645884 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:37.655298 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:37.656801 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:37.914330 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:38.145778 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:38.157237 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:38.157402 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:38.414620 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:38.646479 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:38.656345 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:38.656985 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:38.915023 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:39.146084 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:39.157044 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:39.157894 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:39.414642 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:39.647086 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:39.655346 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:39.657540 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:39.865921 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:39.915582 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:40.146612 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:40.155880 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:40.157958 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:40.415224 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:40.647717 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:40.656483 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:40.657158 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:40.914741 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:41.146861 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:41.157235 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:41.158062 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:41.414658 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:41.646468 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:41.656630 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:41.657014 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:41.866158 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:41.914912 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:42.146622 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:42.156194 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:42.158595 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:42.414297 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:42.645861 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:42.656511 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:42.657362 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:42.915005 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:43.145885 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:43.157399 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:43.157562 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:43.414999 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:43.646092 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:43.656695 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:43.657856 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:43.866233 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:43.914243 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:44.146337 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:44.155982 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:44.157201 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:44.414181 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:44.646915 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:44.655352 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:44.657963 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:44.914010 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:45.147829 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:45.158222 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:45.158450 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:45.414544 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:45.646846 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:45.655633 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:45.657389 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:45.879353 1469207 node_ready.go:49] node "addons-902832" has status "Ready":"True"
	I1001 23:49:45.879387 1469207 node_ready.go:38] duration metric: took 41.016524571s for node "addons-902832" to be "Ready" ...
	I1001 23:49:45.879397 1469207 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:49:45.892293 1469207 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xljjm" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:45.949812 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:46.161823 1469207 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 23:49:46.161851 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:46.166737 1469207 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 23:49:46.166764 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:46.167588 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:46.450375 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:46.647554 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:46.659238 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:46.660217 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:46.916226 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:47.152263 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:47.170016 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:47.171363 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:47.416345 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:47.647962 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:47.669055 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:47.750674 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:47.905341 1469207 pod_ready.go:103] pod "coredns-7c65d6cfc9-xljjm" in "kube-system" namespace has status "Ready":"False"
	I1001 23:49:47.920837 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:48.151352 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:48.248569 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:48.250009 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:48.415145 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:48.647845 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:48.657816 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:48.657991 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:48.901213 1469207 pod_ready.go:93] pod "coredns-7c65d6cfc9-xljjm" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:48.901238 1469207 pod_ready.go:82] duration metric: took 3.008913715s for pod "coredns-7c65d6cfc9-xljjm" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.901257 1469207 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.906292 1469207 pod_ready.go:93] pod "etcd-addons-902832" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:48.906315 1469207 pod_ready.go:82] duration metric: took 5.028534ms for pod "etcd-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.906330 1469207 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.911306 1469207 pod_ready.go:93] pod "kube-apiserver-addons-902832" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:48.911330 1469207 pod_ready.go:82] duration metric: took 4.992826ms for pod "kube-apiserver-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.911341 1469207 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.915591 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:48.917416 1469207 pod_ready.go:93] pod "kube-controller-manager-addons-902832" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:48.917437 1469207 pod_ready.go:82] duration metric: took 6.088957ms for pod "kube-controller-manager-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.917451 1469207 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kx8p9" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.924005 1469207 pod_ready.go:93] pod "kube-proxy-kx8p9" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:48.924031 1469207 pod_ready.go:82] duration metric: took 6.57235ms for pod "kube-proxy-kx8p9" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.924042 1469207 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:49.146988 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:49.155813 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:49.158741 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:49.296976 1469207 pod_ready.go:93] pod "kube-scheduler-addons-902832" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:49.297004 1469207 pod_ready.go:82] duration metric: took 372.952028ms for pod "kube-scheduler-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:49.297016 1469207 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:49.414104 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:49.646968 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:49.656090 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:49.658164 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:49.914772 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:50.147897 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:50.157406 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:50.158783 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:50.415634 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:50.648544 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:50.658704 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:50.660769 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:50.914593 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:51.148394 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:51.157232 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:51.159726 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:51.304833 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:49:51.415652 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:51.653115 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:51.666618 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:51.667301 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:51.914854 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:52.148842 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:52.158821 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:52.159034 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:52.416115 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:52.662699 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:52.668163 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:52.669216 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:52.915065 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:53.150713 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:53.172951 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:53.175853 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:53.415574 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:53.647451 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:53.656454 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:53.658574 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:53.805896 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:49:53.915357 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:54.149539 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:54.159514 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:54.161060 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:54.414951 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:54.651017 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:54.665061 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:54.665703 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:54.914952 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:55.148832 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:55.158945 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:55.160572 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:55.415374 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:55.648326 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:55.658130 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:55.660947 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:55.809419 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:49:55.914767 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:56.147342 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:56.156920 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:56.158100 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:56.415063 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:56.648190 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:56.656377 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:56.658149 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:56.914551 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:57.148141 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:57.157566 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:57.157942 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:57.416641 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:57.648140 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:57.660149 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:57.660640 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:57.915455 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:58.147559 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:58.160529 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:58.177002 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:58.304060 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:49:58.416031 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:58.648682 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:58.672050 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:58.674149 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:58.922172 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:59.147756 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:59.158577 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:59.159836 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:59.415625 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:59.648010 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:59.658223 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:59.659456 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:59.915004 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:00.164670 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:00.198528 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:00.199557 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:00.323921 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:00.416455 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:00.648232 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:00.657488 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:00.658744 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:00.915460 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:01.149252 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:01.157647 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:01.161103 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:01.414880 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:01.648683 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:01.659400 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:01.661221 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:01.916730 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:02.148495 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:02.159084 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:02.161966 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:02.416110 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:02.649265 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:02.664915 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:02.748769 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:02.804883 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:02.916421 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:03.147313 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:03.157213 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:03.158392 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:03.414646 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:03.647627 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:03.655905 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:03.658797 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:03.915141 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:04.147380 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:04.156766 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:04.158972 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:04.415888 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:04.647693 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:04.656924 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:04.658768 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:04.809345 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:04.915536 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:05.148111 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:05.157358 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:05.158664 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:05.414682 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:05.647809 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:05.655896 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:05.657025 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:05.915088 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:06.147142 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:06.159194 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:06.160717 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:06.414474 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:06.648792 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:06.659470 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:06.660449 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:06.816709 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:06.924927 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:07.147847 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:07.156479 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:07.158110 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:07.416086 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:07.649971 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:07.660497 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:07.661215 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:07.917144 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:08.148598 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:08.163674 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:08.165481 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:08.418083 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:08.650047 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:08.658937 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:08.661579 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:08.819092 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:08.917617 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:09.148748 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:09.159506 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:09.160510 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:09.415751 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:09.647839 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:09.665350 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:09.665763 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:09.915512 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:10.147961 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:10.157667 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:10.160397 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:10.415060 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:10.647433 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:10.661223 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:10.662534 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:10.915581 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:11.148260 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:11.155989 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:11.158681 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:11.304379 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:11.414867 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:11.647687 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:11.657131 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:11.658504 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:11.915081 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:12.147960 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:12.157334 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:12.157586 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:12.415111 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:12.648110 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:12.657179 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:12.658063 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:12.934973 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:13.148813 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:13.158799 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:13.160338 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:13.306985 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:13.415315 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:13.650667 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:13.664033 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:13.665775 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:13.915447 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:14.147595 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:14.155975 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:14.158334 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:14.415579 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:14.647580 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:14.656231 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:14.658440 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:14.914862 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:15.147369 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:15.157895 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:15.158699 1469207 kapi.go:107] duration metric: took 1m7.506429472s to wait for kubernetes.io/minikube-addons=registry ...
	I1001 23:50:15.414437 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:15.647489 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:15.658110 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:15.804539 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:15.915255 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:16.151792 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:16.159105 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:16.415286 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:16.647966 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:16.658783 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:16.915199 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:17.148417 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:17.157823 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:17.415214 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:17.650098 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:17.664847 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:17.918027 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:18.148467 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:18.157448 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:18.306253 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:18.415686 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:18.648862 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:18.658263 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:18.916141 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:19.148077 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:19.158531 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:19.415042 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:19.656155 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:19.664191 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:19.915215 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:20.150485 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:20.159489 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:20.306692 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:20.416029 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:20.648683 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:20.658780 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:20.915562 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:21.148743 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:21.159591 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:21.415020 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:21.647382 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:21.658579 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:21.915282 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:22.148104 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:22.158971 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:22.415033 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:22.648479 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:22.657004 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:22.803121 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:22.914699 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:23.148387 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:23.157593 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:23.414706 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:23.650946 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:23.657839 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:23.915700 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:24.149670 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:24.166450 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:24.416611 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:24.649440 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:24.658081 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:24.803281 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:24.916429 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:25.148608 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:25.159738 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:25.415701 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:25.649784 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:25.658050 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:25.916564 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:26.149974 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:26.159378 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:26.415058 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:26.647917 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:26.657883 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:26.803945 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:26.915612 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:27.149497 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:27.158245 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:27.415206 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:27.653026 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:27.661771 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:27.916174 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:28.148878 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:28.158281 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:28.415095 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:28.651407 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:28.659622 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:28.811654 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:28.916428 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:29.152267 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:29.157544 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:29.414627 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:29.647256 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:29.657203 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:29.915438 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:30.147773 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:30.158782 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:30.415454 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:30.647025 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:30.658530 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:30.914437 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:31.148247 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:31.158159 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:31.304123 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:31.414638 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:31.648536 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:31.659534 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:31.915314 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:32.154029 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:32.158784 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:32.426509 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:32.647574 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:32.657692 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:32.914786 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:33.169886 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:33.171478 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:33.309631 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:33.415410 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:33.648344 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:33.657972 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:33.914461 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:34.153113 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:34.162749 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:34.414982 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:34.648093 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:34.658159 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:34.915611 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:35.166664 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:35.172492 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:35.414555 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:35.648740 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:35.657819 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:35.803951 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:35.914248 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:36.152004 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:36.158861 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:36.415520 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:36.647599 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:36.657612 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:36.915332 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:37.148176 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:37.157950 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:37.416366 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:37.649728 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:37.657588 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:37.804876 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:37.915097 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:38.152472 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:38.160034 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:38.415437 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:38.647342 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:38.658089 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:38.915576 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:39.148215 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:39.157992 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:39.418560 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:39.647390 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:39.658942 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:39.915061 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:40.150209 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:40.157407 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:40.308511 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:40.415845 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:40.648038 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:40.658402 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:40.915759 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:41.147726 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:41.157654 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:41.415228 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:41.648140 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:41.658279 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:41.914645 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:42.156214 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:42.160082 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:42.421189 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:42.648758 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:42.658308 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:42.803644 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:42.915072 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:43.153411 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:43.158099 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:43.415420 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:43.646990 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:43.657924 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:43.914761 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:44.148044 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:44.158238 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:44.415187 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:44.648942 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:44.660232 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:44.804205 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:44.916058 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:45.149527 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:45.163165 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:45.418421 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:45.647534 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:45.657643 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:45.914858 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:46.147820 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:46.160817 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:46.417903 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:46.648262 1469207 kapi.go:107] duration metric: took 1m38.505738227s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 23:50:46.658108 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:46.804744 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:46.914514 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:47.157694 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:47.415994 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:47.658095 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:47.914584 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:48.158389 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:48.415256 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:48.657625 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:48.915362 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:49.158029 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:49.303497 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:49.415513 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:49.657642 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:49.915661 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:50.157349 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:50.415426 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:50.658855 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:50.914765 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:51.157846 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:51.303610 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:51.414352 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:51.657963 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:51.914735 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:52.157977 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:52.414673 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:52.657939 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:52.915641 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:53.158517 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:53.306028 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:53.415683 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:53.658962 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:53.914720 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:54.157807 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:54.415829 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:54.658682 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:54.916087 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:55.157738 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:55.415052 1469207 kapi.go:107] duration metric: took 1m42.503997077s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 23:50:55.417475 1469207 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-902832 cluster.
	I1001 23:50:55.419895 1469207 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 23:50:55.422370 1469207 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1001 23:50:55.659778 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:55.805090 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:56.163037 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:56.658954 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:57.159167 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:57.657881 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:58.158821 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:58.302897 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:58.657715 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:59.159350 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:59.663334 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:51:00.180356 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:51:00.306853 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:00.659762 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:51:01.159263 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:51:01.659375 1469207 kapi.go:107] duration metric: took 1m54.006175965s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 23:51:01.662952 1469207 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1001 23:51:01.665528 1469207 addons.go:510] duration metric: took 2m0.026921338s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner default-storageclass storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1001 23:51:02.803550 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:04.807919 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:07.303634 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:09.304093 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:11.803740 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:14.303218 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:16.303306 1469207 pod_ready.go:93] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"True"
	I1001 23:51:16.303381 1469207 pod_ready.go:82] duration metric: took 1m27.006354091s for pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace to be "Ready" ...
	I1001 23:51:16.303400 1469207 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zz9mg" in "kube-system" namespace to be "Ready" ...
	I1001 23:51:16.308891 1469207 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-zz9mg" in "kube-system" namespace has status "Ready":"True"
	I1001 23:51:16.308918 1469207 pod_ready.go:82] duration metric: took 5.507726ms for pod "nvidia-device-plugin-daemonset-zz9mg" in "kube-system" namespace to be "Ready" ...
	I1001 23:51:16.308940 1469207 pod_ready.go:39] duration metric: took 1m30.429530912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:51:16.308957 1469207 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:51:16.308992 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 23:51:16.309056 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 23:51:16.364413 1469207 cri.go:89] found id: "ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:16.364488 1469207 cri.go:89] found id: ""
	I1001 23:51:16.364512 1469207 logs.go:282] 1 containers: [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7]
	I1001 23:51:16.364604 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.368371 1469207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 23:51:16.368448 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 23:51:16.413247 1469207 cri.go:89] found id: "2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:16.413270 1469207 cri.go:89] found id: ""
	I1001 23:51:16.413278 1469207 logs.go:282] 1 containers: [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575]
	I1001 23:51:16.413359 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.416842 1469207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 23:51:16.416958 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 23:51:16.460115 1469207 cri.go:89] found id: "6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:16.460138 1469207 cri.go:89] found id: ""
	I1001 23:51:16.460146 1469207 logs.go:282] 1 containers: [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d]
	I1001 23:51:16.460202 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.463786 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 23:51:16.463861 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 23:51:16.510372 1469207 cri.go:89] found id: "294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:16.510396 1469207 cri.go:89] found id: ""
	I1001 23:51:16.510404 1469207 logs.go:282] 1 containers: [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178]
	I1001 23:51:16.510474 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.515168 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 23:51:16.515312 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 23:51:16.564144 1469207 cri.go:89] found id: "18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:16.564172 1469207 cri.go:89] found id: ""
	I1001 23:51:16.564180 1469207 logs.go:282] 1 containers: [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93]
	I1001 23:51:16.564247 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.568189 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 23:51:16.568258 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 23:51:16.610285 1469207 cri.go:89] found id: "1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:16.610313 1469207 cri.go:89] found id: ""
	I1001 23:51:16.610321 1469207 logs.go:282] 1 containers: [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef]
	I1001 23:51:16.610387 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.614026 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 23:51:16.614099 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 23:51:16.656668 1469207 cri.go:89] found id: "51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:16.656690 1469207 cri.go:89] found id: ""
	I1001 23:51:16.656698 1469207 logs.go:282] 1 containers: [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699]
	I1001 23:51:16.656819 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.660678 1469207 logs.go:123] Gathering logs for coredns [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d] ...
	I1001 23:51:16.660752 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:16.706396 1469207 logs.go:123] Gathering logs for kube-scheduler [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178] ...
	I1001 23:51:16.706427 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:16.761613 1469207 logs.go:123] Gathering logs for CRI-O ...
	I1001 23:51:16.761649 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 23:51:16.861708 1469207 logs.go:123] Gathering logs for dmesg ...
	I1001 23:51:16.861742 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 23:51:16.878452 1469207 logs.go:123] Gathering logs for etcd [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575] ...
	I1001 23:51:16.878491 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:16.924996 1469207 logs.go:123] Gathering logs for kube-apiserver [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7] ...
	I1001 23:51:16.925030 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:16.992363 1469207 logs.go:123] Gathering logs for kube-proxy [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93] ...
	I1001 23:51:16.992401 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:17.037242 1469207 logs.go:123] Gathering logs for kube-controller-manager [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef] ...
	I1001 23:51:17.037272 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:17.118783 1469207 logs.go:123] Gathering logs for kindnet [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699] ...
	I1001 23:51:17.118831 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:17.160367 1469207 logs.go:123] Gathering logs for container status ...
	I1001 23:51:17.160397 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 23:51:17.221184 1469207 logs.go:123] Gathering logs for kubelet ...
	I1001 23:51:17.221214 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 23:51:17.286598 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.807315    1485 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.286852 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.807370    1485 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.287041 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.816094    1485 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.287309 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.816145    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.287500 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.821938    1485 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.287731 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.821988    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.287924 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829571    1485 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.288150 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829620    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.288332 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829787    1485 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.288554 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829815    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.288737 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829992    1485 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.288960 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.289139 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.289363 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.289535 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.289747 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:17.329800 1469207 logs.go:123] Gathering logs for describe nodes ...
	I1001 23:51:17.329832 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 23:51:17.550248 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:17.550282 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 23:51:17.550352 1469207 out.go:270] X Problems detected in kubelet:
	W1001 23:51:17.550367 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.550374 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.550382 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.550388 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.550394 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:17.550516 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:17.550532 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:51:27.551603 1469207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:51:27.565363 1469207 api_server.go:72] duration metric: took 2m25.927024032s to wait for apiserver process to appear ...
	I1001 23:51:27.565388 1469207 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:51:27.565423 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 23:51:27.565482 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 23:51:27.606037 1469207 cri.go:89] found id: "ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:27.606059 1469207 cri.go:89] found id: ""
	I1001 23:51:27.606067 1469207 logs.go:282] 1 containers: [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7]
	I1001 23:51:27.606126 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.609639 1469207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 23:51:27.609710 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 23:51:27.647251 1469207 cri.go:89] found id: "2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:27.647276 1469207 cri.go:89] found id: ""
	I1001 23:51:27.647284 1469207 logs.go:282] 1 containers: [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575]
	I1001 23:51:27.647344 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.650919 1469207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 23:51:27.650990 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 23:51:27.690339 1469207 cri.go:89] found id: "6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:27.690371 1469207 cri.go:89] found id: ""
	I1001 23:51:27.690379 1469207 logs.go:282] 1 containers: [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d]
	I1001 23:51:27.690436 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.694002 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 23:51:27.694101 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 23:51:27.737387 1469207 cri.go:89] found id: "294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:27.737417 1469207 cri.go:89] found id: ""
	I1001 23:51:27.737427 1469207 logs.go:282] 1 containers: [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178]
	I1001 23:51:27.737494 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.741134 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 23:51:27.741209 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 23:51:27.781872 1469207 cri.go:89] found id: "18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:27.781893 1469207 cri.go:89] found id: ""
	I1001 23:51:27.781900 1469207 logs.go:282] 1 containers: [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93]
	I1001 23:51:27.781955 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.785422 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 23:51:27.785497 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 23:51:27.824617 1469207 cri.go:89] found id: "1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:27.824639 1469207 cri.go:89] found id: ""
	I1001 23:51:27.824647 1469207 logs.go:282] 1 containers: [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef]
	I1001 23:51:27.824704 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.828268 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 23:51:27.828338 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 23:51:27.869395 1469207 cri.go:89] found id: "51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:27.869467 1469207 cri.go:89] found id: ""
	I1001 23:51:27.869483 1469207 logs.go:282] 1 containers: [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699]
	I1001 23:51:27.869556 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.873017 1469207 logs.go:123] Gathering logs for dmesg ...
	I1001 23:51:27.873045 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 23:51:27.889954 1469207 logs.go:123] Gathering logs for describe nodes ...
	I1001 23:51:27.889983 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 23:51:28.024160 1469207 logs.go:123] Gathering logs for kube-apiserver [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7] ...
	I1001 23:51:28.024193 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:28.092269 1469207 logs.go:123] Gathering logs for kube-scheduler [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178] ...
	I1001 23:51:28.092303 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:28.141257 1469207 logs.go:123] Gathering logs for CRI-O ...
	I1001 23:51:28.141294 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 23:51:28.234950 1469207 logs.go:123] Gathering logs for container status ...
	I1001 23:51:28.234989 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 23:51:28.307921 1469207 logs.go:123] Gathering logs for kubelet ...
	I1001 23:51:28.307952 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 23:51:28.375164 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.807315    1485 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.375422 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.807370    1485 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.375611 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.816094    1485 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.375842 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.816145    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.376033 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.821938    1485 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.376260 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.821988    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.376452 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829571    1485 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.376678 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829620    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.376859 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829787    1485 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.377082 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829815    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.377263 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829992    1485 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.377492 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.377671 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.377892 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.378065 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.378278 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:28.418958 1469207 logs.go:123] Gathering logs for etcd [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575] ...
	I1001 23:51:28.418986 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:28.472368 1469207 logs.go:123] Gathering logs for coredns [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d] ...
	I1001 23:51:28.472406 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:28.516665 1469207 logs.go:123] Gathering logs for kube-proxy [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93] ...
	I1001 23:51:28.516694 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:28.554784 1469207 logs.go:123] Gathering logs for kube-controller-manager [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef] ...
	I1001 23:51:28.554819 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:28.646942 1469207 logs.go:123] Gathering logs for kindnet [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699] ...
	I1001 23:51:28.646975 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:28.694269 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:28.694348 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 23:51:28.694427 1469207 out.go:270] X Problems detected in kubelet:
	W1001 23:51:28.694585 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.694640 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.694763 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.694814 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.694848 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:28.694882 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:28.694906 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:51:38.696816 1469207 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1001 23:51:38.705548 1469207 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1001 23:51:38.706569 1469207 api_server.go:141] control plane version: v1.31.1
	I1001 23:51:38.706597 1469207 api_server.go:131] duration metric: took 11.141200418s to wait for apiserver health ...
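The healthz wait that just succeeded boils down to polling the apiserver's `/healthz` endpoint until it returns HTTP 200 with body `ok`. A minimal, self-contained sketch of that loop (this is not minikube's code; certificate verification is skipped here because the sketch does not load the cluster CA that signs the apiserver cert):

```go
// Poll the apiserver healthz endpoint seen in the log until it reports ok.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	url := "https://192.168.49.2:8443/healthz"
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(time.Second) {
		resp, err := client.Get(url)
		if err != nil {
			continue // apiserver not reachable yet; retry
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			return
		}
	}
	fmt.Println("timed out waiting for healthz")
}
```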
	I1001 23:51:38.706617 1469207 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:51:38.706638 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 23:51:38.706704 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 23:51:38.747460 1469207 cri.go:89] found id: "ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:38.747486 1469207 cri.go:89] found id: ""
	I1001 23:51:38.747493 1469207 logs.go:282] 1 containers: [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7]
	I1001 23:51:38.747549 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.751078 1469207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 23:51:38.751156 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 23:51:38.789076 1469207 cri.go:89] found id: "2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:38.789100 1469207 cri.go:89] found id: ""
	I1001 23:51:38.789108 1469207 logs.go:282] 1 containers: [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575]
	I1001 23:51:38.789199 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.792673 1469207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 23:51:38.792775 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 23:51:38.834449 1469207 cri.go:89] found id: "6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:38.834472 1469207 cri.go:89] found id: ""
	I1001 23:51:38.834480 1469207 logs.go:282] 1 containers: [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d]
	I1001 23:51:38.834539 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.837974 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 23:51:38.838054 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 23:51:38.876387 1469207 cri.go:89] found id: "294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:38.876411 1469207 cri.go:89] found id: ""
	I1001 23:51:38.876419 1469207 logs.go:282] 1 containers: [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178]
	I1001 23:51:38.876472 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.881038 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 23:51:38.881128 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 23:51:38.919558 1469207 cri.go:89] found id: "18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:38.919577 1469207 cri.go:89] found id: ""
	I1001 23:51:38.919584 1469207 logs.go:282] 1 containers: [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93]
	I1001 23:51:38.919641 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.923618 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 23:51:38.923694 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 23:51:38.960505 1469207 cri.go:89] found id: "1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:38.960523 1469207 cri.go:89] found id: ""
	I1001 23:51:38.960531 1469207 logs.go:282] 1 containers: [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef]
	I1001 23:51:38.960594 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.964424 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 23:51:38.964494 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 23:51:39.003753 1469207 cri.go:89] found id: "51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:39.003828 1469207 cri.go:89] found id: ""
	I1001 23:51:39.003852 1469207 logs.go:282] 1 containers: [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699]
	I1001 23:51:39.003939 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:39.009596 1469207 logs.go:123] Gathering logs for kube-controller-manager [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef] ...
	I1001 23:51:39.009623 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:39.080079 1469207 logs.go:123] Gathering logs for kindnet [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699] ...
	I1001 23:51:39.080120 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:39.130950 1469207 logs.go:123] Gathering logs for container status ...
	I1001 23:51:39.130976 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 23:51:39.193927 1469207 logs.go:123] Gathering logs for kubelet ...
	I1001 23:51:39.193959 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 23:51:39.256642 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.807315    1485 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.256899 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.807370    1485 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.257092 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.816094    1485 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.257324 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.816145    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.257512 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.821938    1485 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.257741 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.821988    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.257930 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829571    1485 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.258159 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829620    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.258341 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829787    1485 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.258564 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829815    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.258751 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829992    1485 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.258976 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.259155 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.259422 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.259598 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.259815 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:39.301276 1469207 logs.go:123] Gathering logs for describe nodes ...
	I1001 23:51:39.301305 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 23:51:39.446748 1469207 logs.go:123] Gathering logs for etcd [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575] ...
	I1001 23:51:39.446785 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:39.504821 1469207 logs.go:123] Gathering logs for kube-proxy [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93] ...
	I1001 23:51:39.504855 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:39.547657 1469207 logs.go:123] Gathering logs for CRI-O ...
	I1001 23:51:39.547686 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 23:51:39.644920 1469207 logs.go:123] Gathering logs for dmesg ...
	I1001 23:51:39.644956 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 23:51:39.661786 1469207 logs.go:123] Gathering logs for kube-apiserver [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7] ...
	I1001 23:51:39.661817 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:39.720069 1469207 logs.go:123] Gathering logs for coredns [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d] ...
	I1001 23:51:39.720103 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:39.767501 1469207 logs.go:123] Gathering logs for kube-scheduler [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178] ...
	I1001 23:51:39.767530 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:39.831378 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:39.831411 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 23:51:39.831493 1469207 out.go:270] X Problems detected in kubelet:
	W1001 23:51:39.831509 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.831533 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.831544 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.831552 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.831557 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:39.831599 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:39.831608 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:51:49.844865 1469207 system_pods.go:59] 18 kube-system pods found
	I1001 23:51:49.844902 1469207 system_pods.go:61] "coredns-7c65d6cfc9-xljjm" [e0ad2956-c010-4fc7-b0d8-4d32b01451d8] Running
	I1001 23:51:49.844908 1469207 system_pods.go:61] "csi-hostpath-attacher-0" [ac852b7e-ae3b-469b-8187-4e7defd56346] Running
	I1001 23:51:49.844913 1469207 system_pods.go:61] "csi-hostpath-resizer-0" [c3e31778-df3b-462e-a4be-109b7954b782] Running
	I1001 23:51:49.844917 1469207 system_pods.go:61] "csi-hostpathplugin-65tpx" [a4743192-4d2a-4c3a-8ee9-46fad74b784b] Running
	I1001 23:51:49.844921 1469207 system_pods.go:61] "etcd-addons-902832" [29071b69-21dc-4c9b-b469-4d667f3eaad8] Running
	I1001 23:51:49.844925 1469207 system_pods.go:61] "kindnet-frb7r" [ab2734fa-ca9d-47b1-a3d9-d34e0e0fb55f] Running
	I1001 23:51:49.844928 1469207 system_pods.go:61] "kube-apiserver-addons-902832" [b9f460d1-7581-4b09-8b2e-646bd2a89859] Running
	I1001 23:51:49.844932 1469207 system_pods.go:61] "kube-controller-manager-addons-902832" [f0e7e114-9900-415d-a36b-c19f1ccb1e4e] Running
	I1001 23:51:49.844936 1469207 system_pods.go:61] "kube-ingress-dns-minikube" [3f10c5a6-50e8-49a4-8cad-a06c995525bd] Running
	I1001 23:51:49.844940 1469207 system_pods.go:61] "kube-proxy-kx8p9" [8619925a-3b0d-41d1-847a-23f287f14b34] Running
	I1001 23:51:49.844944 1469207 system_pods.go:61] "kube-scheduler-addons-902832" [e29eb860-afff-44b0-8e7d-717180fbff55] Running
	I1001 23:51:49.844948 1469207 system_pods.go:61] "metrics-server-84c5f94fbc-78xch" [9a1268e4-5691-4653-93b1-c7a18c5734b5] Running
	I1001 23:51:49.844952 1469207 system_pods.go:61] "nvidia-device-plugin-daemonset-zz9mg" [18ac45a3-6b0c-4535-a78d-cc801c2d3d20] Running
	I1001 23:51:49.844956 1469207 system_pods.go:61] "registry-66c9cd494c-wt4tb" [89b4caf4-80a6-4169-98c5-1a6ccdd606c0] Running
	I1001 23:51:49.844960 1469207 system_pods.go:61] "registry-proxy-8h2cr" [de013b46-27a0-473a-9c80-20d0ffeaaa75] Running
	I1001 23:51:49.844964 1469207 system_pods.go:61] "snapshot-controller-56fcc65765-6sfbh" [6ab5415b-4d25-411c-b95c-4c348f8b8b01] Running
	I1001 23:51:49.844969 1469207 system_pods.go:61] "snapshot-controller-56fcc65765-8d7bz" [42ef9a62-c0ee-4ed2-8516-18421d7e01bf] Running
	I1001 23:51:49.844973 1469207 system_pods.go:61] "storage-provisioner" [5d5990fa-0392-44eb-af89-06f613fee5f9] Running
	I1001 23:51:49.844979 1469207 system_pods.go:74] duration metric: took 11.138355663s to wait for pod list to return data ...
	I1001 23:51:49.844993 1469207 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:51:49.847912 1469207 default_sa.go:45] found service account: "default"
	I1001 23:51:49.847937 1469207 default_sa.go:55] duration metric: took 2.937645ms for default service account to be created ...
	I1001 23:51:49.847946 1469207 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:51:49.858831 1469207 system_pods.go:86] 18 kube-system pods found
	I1001 23:51:49.858868 1469207 system_pods.go:89] "coredns-7c65d6cfc9-xljjm" [e0ad2956-c010-4fc7-b0d8-4d32b01451d8] Running
	I1001 23:51:49.858876 1469207 system_pods.go:89] "csi-hostpath-attacher-0" [ac852b7e-ae3b-469b-8187-4e7defd56346] Running
	I1001 23:51:49.858881 1469207 system_pods.go:89] "csi-hostpath-resizer-0" [c3e31778-df3b-462e-a4be-109b7954b782] Running
	I1001 23:51:49.858886 1469207 system_pods.go:89] "csi-hostpathplugin-65tpx" [a4743192-4d2a-4c3a-8ee9-46fad74b784b] Running
	I1001 23:51:49.858890 1469207 system_pods.go:89] "etcd-addons-902832" [29071b69-21dc-4c9b-b469-4d667f3eaad8] Running
	I1001 23:51:49.858897 1469207 system_pods.go:89] "kindnet-frb7r" [ab2734fa-ca9d-47b1-a3d9-d34e0e0fb55f] Running
	I1001 23:51:49.858901 1469207 system_pods.go:89] "kube-apiserver-addons-902832" [b9f460d1-7581-4b09-8b2e-646bd2a89859] Running
	I1001 23:51:49.858906 1469207 system_pods.go:89] "kube-controller-manager-addons-902832" [f0e7e114-9900-415d-a36b-c19f1ccb1e4e] Running
	I1001 23:51:49.858911 1469207 system_pods.go:89] "kube-ingress-dns-minikube" [3f10c5a6-50e8-49a4-8cad-a06c995525bd] Running
	I1001 23:51:49.858915 1469207 system_pods.go:89] "kube-proxy-kx8p9" [8619925a-3b0d-41d1-847a-23f287f14b34] Running
	I1001 23:51:49.858921 1469207 system_pods.go:89] "kube-scheduler-addons-902832" [e29eb860-afff-44b0-8e7d-717180fbff55] Running
	I1001 23:51:49.858925 1469207 system_pods.go:89] "metrics-server-84c5f94fbc-78xch" [9a1268e4-5691-4653-93b1-c7a18c5734b5] Running
	I1001 23:51:49.858930 1469207 system_pods.go:89] "nvidia-device-plugin-daemonset-zz9mg" [18ac45a3-6b0c-4535-a78d-cc801c2d3d20] Running
	I1001 23:51:49.858939 1469207 system_pods.go:89] "registry-66c9cd494c-wt4tb" [89b4caf4-80a6-4169-98c5-1a6ccdd606c0] Running
	I1001 23:51:49.858951 1469207 system_pods.go:89] "registry-proxy-8h2cr" [de013b46-27a0-473a-9c80-20d0ffeaaa75] Running
	I1001 23:51:49.858959 1469207 system_pods.go:89] "snapshot-controller-56fcc65765-6sfbh" [6ab5415b-4d25-411c-b95c-4c348f8b8b01] Running
	I1001 23:51:49.858963 1469207 system_pods.go:89] "snapshot-controller-56fcc65765-8d7bz" [42ef9a62-c0ee-4ed2-8516-18421d7e01bf] Running
	I1001 23:51:49.858967 1469207 system_pods.go:89] "storage-provisioner" [5d5990fa-0392-44eb-af89-06f613fee5f9] Running
	I1001 23:51:49.858975 1469207 system_pods.go:126] duration metric: took 11.022846ms to wait for k8s-apps to be running ...
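The system_pods wait above amounts to listing kube-system pods and confirming each reports phase Running. An equivalent check with client-go (a sketch, not minikube's implementation; the kubeconfig path is taken from the commands in the log and may differ on other setups):

```go
// List kube-system pods and report any that are not Running.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%q not running: %s\n", p.Name, p.Status.Phase)
		}
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}
```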
	I1001 23:51:49.858987 1469207 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:51:49.859049 1469207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:51:49.872588 1469207 system_svc.go:56] duration metric: took 13.590851ms WaitForService to wait for kubelet
	I1001 23:51:49.872618 1469207 kubeadm.go:582] duration metric: took 2m48.234283937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:51:49.872637 1469207 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:51:49.876175 1469207 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1001 23:51:49.876222 1469207 node_conditions.go:123] node cpu capacity is 2
	I1001 23:51:49.876245 1469207 node_conditions.go:105] duration metric: took 3.597519ms to run NodePressure ...
	I1001 23:51:49.876258 1469207 start.go:241] waiting for startup goroutines ...
	I1001 23:51:49.876271 1469207 start.go:246] waiting for cluster config update ...
	I1001 23:51:49.876288 1469207 start.go:255] writing updated cluster config ...
	I1001 23:51:49.876602 1469207 ssh_runner.go:195] Run: rm -f paused
	I1001 23:51:50.270094 1469207 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 23:51:50.272431 1469207 out.go:177] * Done! kubectl is now configured to use "addons-902832" cluster and "default" namespace by default
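The closing line reports "minor skew: 0" between the kubectl client (1.31.1) and the cluster (1.31.1); kubectl officially supports servers within one minor version, so a skew of 0 needs no warning. The skew is simply the difference of the minor components of the two version strings, for example (a trivial sketch, not minikube's code):

```go
// Compute the minor-version skew between "major.minor.patch" strings.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) (int, error) {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("malformed version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	c, _ := minor("1.31.1") // kubectl client
	s, _ := minor("1.31.1") // cluster
	skew := c - s
	if skew < 0 {
		skew = -skew
	}
	fmt.Println("minor skew:", skew) // 0
}
```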
	
	
	==> CRI-O <==
	Oct 02 00:02:42 addons-902832 crio[964]: time="2024-10-02 00:02:42.953538810Z" level=info msg="Started container" PID=14209 containerID=c5e8e938acd485b37bfa2c0d1d57602e8a99465f0456df295e629b3a80b6b401 description=default/busybox/busybox id=457e7d85-2939-4830-8873-e107e07a7546 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c41d3fbd2ba3334342cf65857978dd79ce22d48ae39b9c8e9700bd7fd022388d
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.727424676Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-27hwm/POD" id=3b13585a-b9bc-4ef9-93cd-6843077a3711 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.727494246Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.753074137Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-27hwm Namespace:default ID:c302ae9ca8d209653acdb3818f75f4671a44a5065601e3bf83c14d009e109f5f UID:3423aef7-6acf-4582-8f38-7b45a79b8a28 NetNS:/var/run/netns/282fa298-6a92-4701-989c-1b10a21d3d19 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.753126665Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-27hwm to CNI network \"kindnet\" (type=ptp)"
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.771060874Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-27hwm Namespace:default ID:c302ae9ca8d209653acdb3818f75f4671a44a5065601e3bf83c14d009e109f5f UID:3423aef7-6acf-4582-8f38-7b45a79b8a28 NetNS:/var/run/netns/282fa298-6a92-4701-989c-1b10a21d3d19 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.771266736Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-27hwm for CNI network kindnet (type=ptp)"
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.773905187Z" level=info msg="Ran pod sandbox c302ae9ca8d209653acdb3818f75f4671a44a5065601e3bf83c14d009e109f5f with infra container: default/hello-world-app-55bf9c44b4-27hwm/POD" id=3b13585a-b9bc-4ef9-93cd-6843077a3711 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.775515651Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d7603321-0b39-4b8e-9b7e-9554b3f94afe name=/runtime.v1.ImageService/ImageStatus
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.775760002Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d7603321-0b39-4b8e-9b7e-9554b3f94afe name=/runtime.v1.ImageService/ImageStatus
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.777870524Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=bce3ea03-fc89-4e04-91e5-7ce632378713 name=/runtime.v1.ImageService/PullImage
	Oct 02 00:04:06 addons-902832 crio[964]: time="2024-10-02 00:04:06.780468607Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.042697198Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.851410504Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=bce3ea03-fc89-4e04-91e5-7ce632378713 name=/runtime.v1.ImageService/PullImage
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.851911595Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=51c563bd-c330-408f-8eda-b7789d945baa name=/runtime.v1.ImageService/ImageStatus
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.852532364Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=51c563bd-c330-408f-8eda-b7789d945baa name=/runtime.v1.ImageService/ImageStatus
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.853249352Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a82db313-8824-4566-ad4e-4d7834ae5fa3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.853824296Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a82db313-8824-4566-ad4e-4d7834ae5fa3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.854494073Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-27hwm/hello-world-app" id=a4064513-e10e-4f24-a2a6-112a8c5f833d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.854587560Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.882566336Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ef24791c4ae9553fb96e3348e06a1e7eb8d9b0e81e93ae1588b4058ee68048d7/merged/etc/passwd: no such file or directory"
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.882611520Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ef24791c4ae9553fb96e3348e06a1e7eb8d9b0e81e93ae1588b4058ee68048d7/merged/etc/group: no such file or directory"
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.945034575Z" level=info msg="Created container b22bf3c114895c8602211ba42cf6eed43ef83e7e626d56e9fcf6840adf577912: default/hello-world-app-55bf9c44b4-27hwm/hello-world-app" id=a4064513-e10e-4f24-a2a6-112a8c5f833d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.947717382Z" level=info msg="Starting container: b22bf3c114895c8602211ba42cf6eed43ef83e7e626d56e9fcf6840adf577912" id=ae5a065b-acd5-45b2-9e21-8e17b64dcfc0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 00:04:07 addons-902832 crio[964]: time="2024-10-02 00:04:07.961706266Z" level=info msg="Started container" PID=14400 containerID=b22bf3c114895c8602211ba42cf6eed43ef83e7e626d56e9fcf6840adf577912 description=default/hello-world-app-55bf9c44b4-27hwm/hello-world-app id=ae5a065b-acd5-45b2-9e21-8e17b64dcfc0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c302ae9ca8d209653acdb3818f75f4671a44a5065601e3bf83c14d009e109f5f
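
Note: the entries above trace one container lifecycle end to end: sandbox run, image status (cache miss), pull, create, start, all within roughly 1.2s. The /etc/passwd and /etc/group warnings appear when an image ships neither file and are almost certainly benign here. To inspect the same state on the node (a sketch, assuming the crictl bundled in the minikube node image):

    minikube ssh -p addons-902832 -- sudo crictl ps --name hello-world-app
    minikube ssh -p addons-902832 -- sudo crictl logs b22bf3c114895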
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	b22bf3c114895       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   c302ae9ca8d20       hello-world-app-55bf9c44b4-27hwm
	c5e8e938acd48       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          About a minute ago       Running             busybox                   0                   c41d3fbd2ba33       busybox
	efbdf0c5ab72f       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                              2 minutes ago            Running             nginx                     0                   0b7cbc606bdee       nginx
	d91bc54e80c0c       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             13 minutes ago           Running             controller                0                   1e1a56981195c       ingress-nginx-controller-bc57996ff-4zl65
	bcfa00852ce42       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             13 minutes ago           Exited              patch                     2                   d90c21a180c2a       ingress-nginx-admission-patch-vnf2m
	fb83cab1649d5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago           Exited              create                    0                   b05db13c86291       ingress-nginx-admission-create-zbclz
	d19b81e8f59f8       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        14 minutes ago           Running             metrics-server            0                   ff26a80df1086       metrics-server-84c5f94fbc-78xch
	57ccfa3f6360f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             14 minutes ago           Running             minikube-ingress-dns      0                   8e8e0324c0558       kube-ingress-dns-minikube
	6b659db8e497d       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             14 minutes ago           Running             coredns                   0                   d5318fa4e0e8d       coredns-7c65d6cfc9-xljjm
	3364809d715c9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             14 minutes ago           Running             storage-provisioner       0                   3685e59a1c422       storage-provisioner
	51eadcd4b4318       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             15 minutes ago           Running             kindnet-cni               0                   a7295f4fba74d       kindnet-frb7r
	18f058a3c9bdb       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             15 minutes ago           Running             kube-proxy                0                   58c2c597c44bd       kube-proxy-kx8p9
	ea09c31646705       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             15 minutes ago           Running             kube-apiserver            0                   01d76498457eb       kube-apiserver-addons-902832
	1960bfd78af26       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             15 minutes ago           Running             kube-controller-manager   0                   d6cbe00b0bfa9       kube-controller-manager-addons-902832
	294331fdf9590       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             15 minutes ago           Running             kube-scheduler            0                   6c609d281447f       kube-scheduler-addons-902832
	2ba267277f9df       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             15 minutes ago           Running             etcd                      0                   44caa6b3912c3       etcd-addons-902832
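
Note: this table is crictl-style output covering every container on the node, including the two Exited admission jobs, which are expected to terminate. A sketch of how to regenerate it:

    minikube ssh -p addons-902832 -- sudo crictl ps -a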
	
	
	==> coredns [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d] <==
	[INFO] 10.244.0.5:59079 - 58662 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002881828s
	[INFO] 10.244.0.5:59079 - 40874 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000148813s
	[INFO] 10.244.0.5:59079 - 29569 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000154195s
	[INFO] 10.244.0.5:52795 - 23285 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137178s
	[INFO] 10.244.0.5:52795 - 22808 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047326s
	[INFO] 10.244.0.5:56789 - 57883 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083969s
	[INFO] 10.244.0.5:56789 - 57702 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000080154s
	[INFO] 10.244.0.5:39898 - 49589 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007898s
	[INFO] 10.244.0.5:39898 - 49160 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110233s
	[INFO] 10.244.0.5:52527 - 23968 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001645327s
	[INFO] 10.244.0.5:52527 - 24141 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001713863s
	[INFO] 10.244.0.5:59762 - 10211 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079949s
	[INFO] 10.244.0.5:59762 - 9799 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000159045s
	[INFO] 10.244.0.19:59271 - 34053 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000176045s
	[INFO] 10.244.0.19:42044 - 3374 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00008104s
	[INFO] 10.244.0.19:38013 - 63294 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164165s
	[INFO] 10.244.0.19:60319 - 11482 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102906s
	[INFO] 10.244.0.19:44254 - 7013 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151389s
	[INFO] 10.244.0.19:58740 - 11978 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143579s
	[INFO] 10.244.0.19:34878 - 10917 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002052859s
	[INFO] 10.244.0.19:36367 - 45597 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002501922s
	[INFO] 10.244.0.19:50577 - 27606 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002298719s
	[INFO] 10.244.0.19:59393 - 627 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003655143s
	[INFO] 10.244.0.23:39741 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000201816s
	[INFO] 10.244.0.23:51208 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164074s
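
Note: the NXDOMAIN runs above are normal search-path expansion, not resolution failures. With the default pod resolv.conf (ndots:5), any name with fewer than five dots is first tried against each search suffix (<namespace>.svc.cluster.local, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal), so several NXDOMAIN answers precede the final NOERROR. To confirm the search list a pod actually uses (assuming the busybox pod from this run is still present):

    kubectl --context addons-902832 exec busybox -- cat /etc/resolv.conf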
	
	
	==> describe nodes <==
	Name:               addons-902832
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-902832
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=addons-902832
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T23_48_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-902832
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:48:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-902832
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 02 Oct 2024 00:04:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 02 Oct 2024 00:03:03 +0000   Tue, 01 Oct 2024 23:48:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 02 Oct 2024 00:03:03 +0000   Tue, 01 Oct 2024 23:48:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 02 Oct 2024 00:03:03 +0000   Tue, 01 Oct 2024 23:48:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 02 Oct 2024 00:03:03 +0000   Tue, 01 Oct 2024 23:49:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-902832
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 61bc9dacefd548c8b2fdd23884b39f6c
	  System UUID:                0a0a3c90-92d5-433f-a6ea-4aa243645a16
	  Boot ID:                    9260520d-e63f-40a7-a450-76e3284bd194
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-27hwm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-4zl65    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-xljjm                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-addons-902832                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-frb7r                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-addons-902832                250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-902832       200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-kx8p9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-902832                100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-78xch             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node addons-902832 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node addons-902832 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node addons-902832 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node addons-902832 event: Registered Node addons-902832 in Controller
	  Normal   NodeReady                14m                kubelet          Node addons-902832 status is now: NodeReady
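
Note: the node reports Ready with no pressure conditions, and the 1050m CPU request total against a 2-CPU node (52%) leaves headroom, so resource exhaustion is an unlikely cause here. This block can be regenerated with:

    kubectl --context addons-902832 describe node addons-902832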
	
	
	==> dmesg <==
	
	
	==> etcd [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575] <==
	{"level":"info","ts":"2024-10-01T23:48:51.643207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T23:48:51.643318Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T23:48:51.643386Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:48:51.643487Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:48:51.643542Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:49:02.339621Z","caller":"traceutil/trace.go:171","msg":"trace[1362621792] linearizableReadLoop","detail":"{readStateIndex:342; appliedIndex:341; }","duration":"153.781877ms","start":"2024-10-01T23:49:02.185821Z","end":"2024-10-01T23:49:02.339602Z","steps":["trace[1362621792] 'read index received'  (duration: 114.918247ms)","trace[1362621792] 'applied index is now lower than readState.Index'  (duration: 38.862998ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T23:49:02.348430Z","caller":"traceutil/trace.go:171","msg":"trace[1602327104] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"232.522348ms","start":"2024-10-01T23:49:02.115875Z","end":"2024-10-01T23:49:02.348397Z","steps":["trace[1602327104] 'process raft request'  (duration: 184.858039ms)","trace[1602327104] 'compare'  (duration: 38.779127ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T23:49:02.355921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.982777ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T23:49:02.367582Z","caller":"traceutil/trace.go:171","msg":"trace[1751064291] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:334; }","duration":"181.751338ms","start":"2024-10-01T23:49:02.185817Z","end":"2024-10-01T23:49:02.367568Z","steps":["trace[1751064291] 'agreement among raft nodes before linearized reading'  (duration: 169.955709ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:49:02.376838Z","caller":"traceutil/trace.go:171","msg":"trace[299981994] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"127.208659ms","start":"2024-10-01T23:49:02.249592Z","end":"2024-10-01T23:49:02.376800Z","steps":["trace[299981994] 'process raft request'  (duration: 117.913615ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T23:49:02.612925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.163215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-frb7r\" ","response":"range_response_count:1 size:3689"}
	{"level":"info","ts":"2024-10-01T23:49:02.614732Z","caller":"traceutil/trace.go:171","msg":"trace[581668595] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-frb7r; range_end:; response_count:1; response_revision:340; }","duration":"145.980731ms","start":"2024-10-01T23:49:02.468738Z","end":"2024-10-01T23:49:02.614719Z","steps":["trace[581668595] 'agreement among raft nodes before linearized reading'  (duration: 144.117448ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:49:02.643702Z","caller":"traceutil/trace.go:171","msg":"trace[1591692366] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"105.588128ms","start":"2024-10-01T23:49:02.533069Z","end":"2024-10-01T23:49:02.638657Z","steps":["trace[1591692366] 'process raft request'  (duration: 79.736435ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:49:03.226879Z","caller":"traceutil/trace.go:171","msg":"trace[192823060] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"154.228185ms","start":"2024-10-01T23:49:03.072627Z","end":"2024-10-01T23:49:03.226855Z","steps":["trace[192823060] 'process raft request'  (duration: 153.946009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:49:04.752034Z","caller":"traceutil/trace.go:171","msg":"trace[1215227702] transaction","detail":"{read_only:false; response_revision:352; number_of_response:1; }","duration":"100.770764ms","start":"2024-10-01T23:49:04.651245Z","end":"2024-10-01T23:49:04.752015Z","steps":["trace[1215227702] 'process raft request'  (duration: 81.493276ms)","trace[1215227702] 'compare'  (duration: 18.592196ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T23:49:05.602194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.270684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T23:49:05.602249Z","caller":"traceutil/trace.go:171","msg":"trace[756978685] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:373; }","duration":"131.342953ms","start":"2024-10-01T23:49:05.470893Z","end":"2024-10-01T23:49:05.602236Z","steps":["trace[756978685] 'agreement among raft nodes before linearized reading'  (duration: 131.247587ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T23:49:05.602453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.598266ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-10-01T23:49:05.602481Z","caller":"traceutil/trace.go:171","msg":"trace[756940475] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:373; }","duration":"131.627541ms","start":"2024-10-01T23:49:05.470847Z","end":"2024-10-01T23:49:05.602475Z","steps":["trace[756940475] 'agreement among raft nodes before linearized reading'  (duration: 131.567407ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:58:52.019225Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-10-01T23:58:52.051092Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1502,"took":"31.35509ms","hash":1230122647,"current-db-size-bytes":6225920,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3153920,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-10-01T23:58:52.051142Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1230122647,"revision":1502,"compact-revision":-1}
	{"level":"info","ts":"2024-10-02T00:03:52.025517Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1916}
	{"level":"info","ts":"2024-10-02T00:03:52.043649Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1916,"took":"17.508938ms","hash":3762113891,"current-db-size-bytes":6225920,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":4333568,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-10-02T00:03:52.043704Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3762113891,"revision":1916,"compact-revision":1502}
	
	
	==> kernel <==
	 00:04:08 up  5:46,  0 users,  load average: 0.97, 0.73, 1.42
	Linux addons-902832 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
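
Note: a quick way to re-check the node's kernel and distro from outside the container (assumes the profile name used in this run):

    minikube ssh -p addons-902832 -- uname -a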
	
	
	==> kindnet [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699] <==
	I1002 00:02:05.361179       1 main.go:299] handling current node
	I1002 00:02:15.367548       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:02:15.367592       1 main.go:299] handling current node
	I1002 00:02:25.368158       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:02:25.368196       1 main.go:299] handling current node
	I1002 00:02:35.362783       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:02:35.362895       1 main.go:299] handling current node
	I1002 00:02:45.361164       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:02:45.361290       1 main.go:299] handling current node
	I1002 00:02:55.362760       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:02:55.362804       1 main.go:299] handling current node
	I1002 00:03:05.360705       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:03:05.360740       1 main.go:299] handling current node
	I1002 00:03:15.366369       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:03:15.366403       1 main.go:299] handling current node
	I1002 00:03:25.369691       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:03:25.369727       1 main.go:299] handling current node
	I1002 00:03:35.360800       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:03:35.360837       1 main.go:299] handling current node
	I1002 00:03:45.360756       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:03:45.360894       1 main.go:299] handling current node
	I1002 00:03:55.365934       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:03:55.366124       1 main.go:299] handling current node
	I1002 00:04:05.361321       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:04:05.361354       1 main.go:299] handling current node
	
	
	==> kube-apiserver [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7] <==
	 > logger="UnhandledError"
	E1001 23:51:21.201238       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.30.3:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.30.3:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.30.3:443: i/o timeout" logger="UnhandledError"
	I1001 23:51:21.230044       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1001 23:51:21.242498       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1002 00:00:03.704620       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.98.244"}
	E1002 00:00:56.397568       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1002 00:01:00.368253       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1002 00:01:28.828093       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 00:01:28.828153       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 00:01:28.850230       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 00:01:28.851623       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 00:01:28.865967       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 00:01:28.866100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 00:01:28.894955       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 00:01:28.897120       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 00:01:28.919175       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 00:01:28.919784       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 00:01:29.896531       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1002 00:01:29.921179       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1002 00:01:30.015798       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1002 00:01:42.549384       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1002 00:01:43.597110       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1002 00:01:48.160096       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 00:01:48.488270       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.118.197"}
	I1002 00:04:06.659635       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.94.76"}
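
Note: the i/o timeout against 10.96.30.3:443 at 23:51 is the apiserver failing to reach the metrics-server backend of the v1beta1.metrics.k8s.io APIService, which lines up with the metrics-related failure in this run. One way to check whether the aggregated API ever became Available:

    kubectl --context addons-902832 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions}'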
	
	
	==> kube-controller-manager [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef] <==
	E1002 00:02:20.553867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:02:43.048443       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:02:43.048489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:02:45.846382       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:02:45.846431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:02:49.309453       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:02:49.309494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1002 00:03:03.758657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-902832"
	W1002 00:03:04.163798       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:03:04.163841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:03:30.091059       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:03:30.091109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:03:32.125164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:03:32.125206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:03:34.976299       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:03:34.976341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:03:44.229949       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:03:44.230072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:04:01.105412       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:04:01.105453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1002 00:04:06.432199       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.155048ms"
	I1002 00:04:06.493074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="60.748961ms"
	I1002 00:04:06.493418       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.802µs"
	I1002 00:04:08.307561       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="16.241539ms"
	I1002 00:04:08.307853       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.009µs"
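
Note: the repeating PartialObjectMetadata list/watch failures most plausibly come from metadata informers (garbage collector / quota) still tracking resource types whose CRDs were deleted around 00:01, matching the snapshot.storage.k8s.io and gadget.kinvolk.io removals in the apiserver log above; they should stop once the discovery cache resyncs. To verify the CRDs are in fact gone:

    kubectl --context addons-902832 get crds | grep -E 'snapshot|gadget'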
	
	
	==> kube-proxy [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93] <==
	I1001 23:49:05.891440       1 server_linux.go:66] "Using iptables proxy"
	I1001 23:49:07.001537       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1001 23:49:07.011298       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 23:49:07.351838       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1001 23:49:07.352007       1 server_linux.go:169] "Using iptables Proxier"
	I1001 23:49:07.379845       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 23:49:07.380496       1 server.go:483] "Version info" version="v1.31.1"
	I1001 23:49:07.380564       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:49:07.436072       1 config.go:328] "Starting node config controller"
	I1001 23:49:07.436108       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 23:49:07.437155       1 config.go:199] "Starting service config controller"
	I1001 23:49:07.437178       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 23:49:07.437389       1 config.go:105] "Starting endpoint slice config controller"
	I1001 23:49:07.437404       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 23:49:07.540120       1 shared_informer.go:320] Caches are synced for node config
	I1001 23:49:07.540255       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 23:49:07.541172       1 shared_informer.go:320] Caches are synced for service config
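
Note: kube-proxy came up in iptables mode with route_localnet=1, so NodePort traffic to 127.0.0.1 should be accepted; the nodePortAddresses warning is advisory only. That makes proxy configuration an unlikely explanation for the curl against 127.0.0.1 timing out in this run, and points at the ingress data path instead. To re-check:

    kubectl --context addons-902832 -n kube-system logs kube-proxy-kx8p9 --tail=20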
	
	
	==> kube-scheduler [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178] <==
	W1001 23:48:54.169836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 23:48:54.169876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:54.169925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 23:48:54.172303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:54.171353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 23:48:54.172469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:54.171407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 23:48:54.172561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:54.174808       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 23:48:54.174841       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 23:48:54.982088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 23:48:54.982133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:54.996869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 23:48:54.996987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:55.022987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 23:48:55.023157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:55.025759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 23:48:55.025942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:55.204403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 23:48:55.204525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:55.274522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 23:48:55.274660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:55.463513       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 23:48:55.463637       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1001 23:48:57.147229       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
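
Note: the burst of "forbidden" list/watch errors at 23:48:54-55 is the usual scheduler bootstrap race, where RBAC objects had not yet propagated; the final "Caches are synced" line confirms it resolved within seconds. Current permissions can be spot-checked with:

    kubectl --context addons-902832 auth can-i list pods --as=system:kube-scheduler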
	
	
	==> kubelet <==
	Oct 02 00:02:39 addons-902832 kubelet[1485]: I1002 00:02:39.399240    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 00:02:43 addons-902832 kubelet[1485]: I1002 00:02:43.109122    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 00:02:43 addons-902832 kubelet[1485]: I1002 00:02:43.119211    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=54.439536454 podStartE2EDuration="55.119163539s" podCreationTimestamp="2024-10-02 00:01:48 +0000 UTC" firstStartedPulling="2024-10-02 00:01:48.757122486 +0000 UTC m=+772.457714737" lastFinishedPulling="2024-10-02 00:01:49.436749571 +0000 UTC m=+773.137341822" observedRunningTime="2024-10-02 00:01:49.99797135 +0000 UTC m=+773.698563601" watchObservedRunningTime="2024-10-02 00:02:43.119163539 +0000 UTC m=+826.819755790"
	Oct 02 00:02:46 addons-902832 kubelet[1485]: E1002 00:02:46.819816    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827366819585153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:02:46 addons-902832 kubelet[1485]: E1002 00:02:46.819856    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827366819585153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:02:56 addons-902832 kubelet[1485]: E1002 00:02:56.822461    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827376822217062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:02:56 addons-902832 kubelet[1485]: E1002 00:02:56.822500    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827376822217062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:06 addons-902832 kubelet[1485]: E1002 00:03:06.825552    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827386825309559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:06 addons-902832 kubelet[1485]: E1002 00:03:06.825592    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827386825309559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:16 addons-902832 kubelet[1485]: E1002 00:03:16.828654    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827396828398298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:16 addons-902832 kubelet[1485]: E1002 00:03:16.828700    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827396828398298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:26 addons-902832 kubelet[1485]: E1002 00:03:26.831597    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827406831372983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:26 addons-902832 kubelet[1485]: E1002 00:03:26.831640    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827406831372983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:36 addons-902832 kubelet[1485]: E1002 00:03:36.834693    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827416834465350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:36 addons-902832 kubelet[1485]: E1002 00:03:36.834728    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827416834465350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:46 addons-902832 kubelet[1485]: E1002 00:03:46.837234    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827426837009693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:46 addons-902832 kubelet[1485]: E1002 00:03:46.837276    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827426837009693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:56 addons-902832 kubelet[1485]: E1002 00:03:56.483152    1485 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664, memory: /docker/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664/system.slice/kubelet.service"
	Oct 02 00:03:56 addons-902832 kubelet[1485]: E1002 00:03:56.840394    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827436840162661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:03:56 addons-902832 kubelet[1485]: E1002 00:03:56.840437    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827436840162661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:04:04 addons-902832 kubelet[1485]: I1002 00:04:04.399992    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 00:04:06 addons-902832 kubelet[1485]: I1002 00:04:06.424880    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=84.86491316 podStartE2EDuration="12m16.424840539s" podCreationTimestamp="2024-10-01 23:51:50 +0000 UTC" firstStartedPulling="2024-10-01 23:51:51.323606357 +0000 UTC m=+175.024198608" lastFinishedPulling="2024-10-02 00:02:42.883533736 +0000 UTC m=+826.584125987" observedRunningTime="2024-10-02 00:02:43.121308973 +0000 UTC m=+826.821901224" watchObservedRunningTime="2024-10-02 00:04:06.424840539 +0000 UTC m=+910.125432798"
	Oct 02 00:04:06 addons-902832 kubelet[1485]: I1002 00:04:06.567957    1485 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pk7f\" (UniqueName: \"kubernetes.io/projected/3423aef7-6acf-4582-8f38-7b45a79b8a28-kube-api-access-4pk7f\") pod \"hello-world-app-55bf9c44b4-27hwm\" (UID: \"3423aef7-6acf-4582-8f38-7b45a79b8a28\") " pod="default/hello-world-app-55bf9c44b4-27hwm"
	Oct 02 00:04:06 addons-902832 kubelet[1485]: E1002 00:04:06.843483    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827446843136038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:04:06 addons-902832 kubelet[1485]: E1002 00:04:06.843520    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827446843136038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586376,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3364809d715c943bf5cba98a2de1982916305c3e5460d68ea5c787d3a04bf1c3] <==
	I1001 23:49:46.513428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 23:49:46.527896       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 23:49:46.528012       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 23:49:46.536238       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 23:49:46.536492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-902832_2a6589a3-258f-41de-a093-78aeb5af280a!
	I1001 23:49:46.536616       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fab6e1ea-fdd0-48bb-a53a-d4b2719a951f", APIVersion:"v1", ResourceVersion:"874", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-902832_2a6589a3-258f-41de-a093-78aeb5af280a became leader
	I1001 23:49:46.636900       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-902832_2a6589a3-258f-41de-a093-78aeb5af280a!
	

-- /stdout --
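A pattern worth noting in the kubelet log above: every eviction-manager sync fails with "missing image stats", even though CRI-O's ImageFsInfo response plainly reports usage for /var/lib/containers/storage/overlay-images. The error is noise for this run, but if you want to look at the same CRI data by hand, a minimal check (assuming the addons-902832 profile is still up; crictl ships inside the minikube node image) is:

	# Inspect the CRI image-filesystem stats that kubelet is complaining about
	out/minikube-linux-arm64 -p addons-902832 ssh "sudo crictl imagefsinfo"

The mountpoint, used bytes, and inode counts it prints are the same values embedded verbatim in the eviction-manager error messages.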
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-902832 -n addons-902832
helpers_test.go:261: (dbg) Run:  kubectl --context addons-902832 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-zbclz ingress-nginx-admission-patch-vnf2m
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-902832 describe pod ingress-nginx-admission-create-zbclz ingress-nginx-admission-patch-vnf2m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-902832 describe pod ingress-nginx-admission-create-zbclz ingress-nginx-admission-patch-vnf2m: exit status 1 (80.732581ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zbclz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vnf2m" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-902832 describe pod ingress-nginx-admission-create-zbclz ingress-nginx-admission-patch-vnf2m: exit status 1
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-902832 addons disable ingress-dns --alsologtostderr -v=1: (1.745144218s)
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable ingress --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-902832 addons disable ingress --alsologtostderr -v=1: (7.739673417s)
--- FAIL: TestAddons/parallel/Ingress (151.25s)
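For reference, the step that actually failed here is the in-node curl against the ingress controller (ssh exit status 28 matches curl's "operation timed out" exit code, propagated through minikube ssh). A hand-run version of the same probe, plus the obvious follow-up checks, would look roughly like this (the -m 10 timeout is an addition for convenience, not part of the test):

	# Re-run the failing probe with an explicit timeout
	out/minikube-linux-arm64 -p addons-902832 ssh "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Confirm the controller and the test ingress actually exist
	kubectl --context addons-902832 -n ingress-nginx get pods,svc
	kubectl --context addons-902832 get ingress -A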

x
+
TestAddons/parallel/MetricsServer (338.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.54815ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-78xch" [9a1268e4-5691-4653-93b1-c7a18c5734b5] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.023213747s
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (152.604632ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 12m29.271219443s

** /stderr **
I1002 00:01:30.276760 1468453 retry.go:31] will retry after 2.750005227s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (92.109534ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 12m32.116421517s

** /stderr **
I1002 00:01:33.119435 1468453 retry.go:31] will retry after 2.782260462s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (127.208827ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 12m35.022747838s

** /stderr **
I1002 00:01:36.029737 1468453 retry.go:31] will retry after 7.243349322s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (108.764683ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 12m42.37922282s

** /stderr **
I1002 00:01:43.382178 1468453 retry.go:31] will retry after 12.102962718s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (125.303561ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 12m54.607702874s

** /stderr **
I1002 00:01:55.610801 1468453 retry.go:31] will retry after 10.648076193s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (84.607226ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 13m5.340830717s

** /stderr **
I1002 00:02:06.344292 1468453 retry.go:31] will retry after 18.467247637s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (87.771606ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 13m23.896598913s

** /stderr **
I1002 00:02:24.900262 1468453 retry.go:31] will retry after 48.975251967s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (82.031436ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 14m12.955848913s

** /stderr **
I1002 00:03:13.958557 1468453 retry.go:31] will retry after 1m14.998872968s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (89.057921ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 15m28.049663576s

** /stderr **
I1002 00:04:29.052486 1468453 retry.go:31] will retry after 1m6.965234849s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (90.065578ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 16m35.105169998s

** /stderr **
I1002 00:05:36.108184 1468453 retry.go:31] will retry after 1m23.491941933s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-902832 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-902832 top pods -n kube-system: exit status 1 (87.690637ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xljjm, age: 17m58.691245746s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
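The verdict above means kubectl top never returned data across roughly 5.5 minutes of retries, even though the metrics-server pod itself was Running. kubectl top is served through the aggregated metrics API, so a reasonable manual triage sequence (standard commands, not part of this test) is to check the APIService registration first and then the server's own logs:

	# metrics-server registers this APIService; it must report Available=True
	kubectl --context addons-902832 get apiservice v1beta1.metrics.k8s.io
	# Same pod selector the test uses
	kubectl --context addons-902832 -n kube-system logs -l k8s-app=metrics-server --tail=50
	# The probe the test retried
	kubectl --context addons-902832 top pods -n kube-system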
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-902832
helpers_test.go:235: (dbg) docker inspect addons-902832:

-- stdout --
	[
	    {
	        "Id": "7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664",
	        "Created": "2024-10-01T23:48:33.177211615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1469706,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-01T23:48:33.306193379Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664/hostname",
	        "HostsPath": "/var/lib/docker/containers/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664/hosts",
	        "LogPath": "/var/lib/docker/containers/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664/7624d238c4e1e733c03e23211740a8a195a5a89f697d5f2d22503bb683d08664-json.log",
	        "Name": "/addons-902832",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-902832:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-902832",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6ae505a6eedd604f944a4460652cbaec9dd0c83d912166e9fe359a09a3211aeb-init/diff:/var/lib/docker/overlay2/a3930beaaef2dcba1a61f406e1fdc853ce637c87ef61fa93a286e9e50993b951/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ae505a6eedd604f944a4460652cbaec9dd0c83d912166e9fe359a09a3211aeb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ae505a6eedd604f944a4460652cbaec9dd0c83d912166e9fe359a09a3211aeb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ae505a6eedd604f944a4460652cbaec9dd0c83d912166e9fe359a09a3211aeb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-902832",
	                "Source": "/var/lib/docker/volumes/addons-902832/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-902832",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-902832",
	                "name.minikube.sigs.k8s.io": "addons-902832",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8042e82f667c42dcf5dc036da3e36737da63298a2ba0bbda92fdd57e5051eb88",
	            "SandboxKey": "/var/run/docker/netns/8042e82f667c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34294"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34295"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34298"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34296"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34297"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-902832": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "51635cd5ac36d0dc534d71775aefdac2f936c0b4261dead30f5dc6b0bafee43e",
	                    "EndpointID": "74eba1b9edd80348539063b917b90f54ddf72306bc662f0e484a3002e5b81402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-902832",
	                        "7624d238c4e1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
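The inspect dump above is the raw JSON; single fields can be pulled out with docker's standard --format templates, which is handy when comparing against the test's expectations (for example, the 192.168.49.2 address shown in the Networks block, or the host port mapped to 8443):

	# Container IP on the addons-902832 network
	docker inspect -f '{{(index .NetworkSettings.Networks "addons-902832").IPAddress}}' addons-902832
	# Host port mapped to the API server
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-902832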
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-902832 -n addons-902832
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-902832 logs -n 25: (1.474698108s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-549806 | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC |                     |
	|         | download-docker-549806                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-549806                                                                   | download-docker-549806 | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-904477   | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC |                     |
	|         | binary-mirror-904477                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33775                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-904477                                                                     | binary-mirror-904477   | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:48 UTC |
	| addons  | enable dashboard -p                                                                         | addons-902832          | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC |                     |
	|         | addons-902832                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-902832          | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC |                     |
	|         | addons-902832                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-902832 --wait=true                                                                | addons-902832          | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 01 Oct 24 23:51 UTC | 01 Oct 24 23:51 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 01 Oct 24 23:59 UTC | 02 Oct 24 00:00 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | -p addons-902832                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-902832 ip                                                                            | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | -p addons-902832                                                                            |                        |         |         |                     |                     |
	| addons  | addons-902832 addons                                                                        | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-902832 ssh cat                                                                       | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	|         | /opt/local-path-provisioner/pvc-cf99ba77-1628-40e8-9e38-1970b272e06c_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-902832 addons                                                                        | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:01 UTC | 02 Oct 24 00:01 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-902832 addons                                                                        | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:01 UTC | 02 Oct 24 00:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-902832 addons                                                                        | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:01 UTC | 02 Oct 24 00:01 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-902832 ssh curl -s                                                                   | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-902832 ip                                                                            | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:04 UTC | 02 Oct 24 00:04 UTC |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:04 UTC | 02 Oct 24 00:04 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-902832 addons disable                                                                | addons-902832          | jenkins | v1.34.0 | 02 Oct 24 00:04 UTC | 02 Oct 24 00:04 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:48:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:48:09.068891 1469207 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:48:09.069027 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:48:09.069038 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:48:09.069043 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:48:09.069264 1469207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	I1001 23:48:09.069685 1469207 out.go:352] Setting JSON to false
	I1001 23:48:09.070706 1469207 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":19829,"bootTime":1727806660,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1001 23:48:09.070797 1469207 start.go:139] virtualization:  
	I1001 23:48:09.073822 1469207 out.go:177] * [addons-902832] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 23:48:09.075733 1469207 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:48:09.075775 1469207 notify.go:220] Checking for updates...
	I1001 23:48:09.078742 1469207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:48:09.079918 1469207 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	I1001 23:48:09.081101 1469207 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	I1001 23:48:09.082361 1469207 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 23:48:09.083848 1469207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:48:09.085283 1469207 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:48:09.105988 1469207 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:48:09.106128 1469207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:48:09.157252 1469207 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-01 23:48:09.147046206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:48:09.157429 1469207 docker.go:318] overlay module found
	I1001 23:48:09.160140 1469207 out.go:177] * Using the docker driver based on user configuration
	I1001 23:48:09.161776 1469207 start.go:297] selected driver: docker
	I1001 23:48:09.161802 1469207 start.go:901] validating driver "docker" against <nil>
	I1001 23:48:09.161828 1469207 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:48:09.162495 1469207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:48:09.211620 1469207 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-01 23:48:09.202421967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:48:09.211832 1469207 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:48:09.212072 1469207 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:48:09.214105 1469207 out.go:177] * Using Docker driver with root privileges
	I1001 23:48:09.215756 1469207 cni.go:84] Creating CNI manager for ""
	I1001 23:48:09.215823 1469207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 23:48:09.215835 1469207 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 23:48:09.215906 1469207 start.go:340] cluster config:
	{Name:addons-902832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-902832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:48:09.217609 1469207 out.go:177] * Starting "addons-902832" primary control-plane node in "addons-902832" cluster
	I1001 23:48:09.218971 1469207 cache.go:121] Beginning downloading kic base image for docker with crio
	I1001 23:48:09.220675 1469207 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1001 23:48:09.222570 1469207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:48:09.222625 1469207 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1001 23:48:09.222638 1469207 cache.go:56] Caching tarball of preloaded images
	I1001 23:48:09.222668 1469207 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 23:48:09.222721 1469207 preload.go:172] Found /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1001 23:48:09.222731 1469207 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:48:09.223083 1469207 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/config.json ...
	I1001 23:48:09.223142 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/config.json: {Name:mkf0c7c65aa397d04b9c786920da3f0162eb288c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:09.237560 1469207 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 23:48:09.237696 1469207 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 23:48:09.237738 1469207 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1001 23:48:09.237744 1469207 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1001 23:48:09.237752 1469207 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1001 23:48:09.237757 1469207 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1001 23:48:26.096603 1469207 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1001 23:48:26.096644 1469207 cache.go:194] Successfully downloaded all kic artifacts
	I1001 23:48:26.096685 1469207 start.go:360] acquireMachinesLock for addons-902832: {Name:mk9b70b1d6aef24ed741e07d772b84dae38e28fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:48:26.097184 1469207 start.go:364] duration metric: took 473.162µs to acquireMachinesLock for "addons-902832"
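
The acquireMachinesLock entry shows the lock parameters minikube threads through (Delay:500ms Timeout:10m0s). A rough sketch of that poll-until-deadline shape, using a plain lock file as a stand-in for the real mutex implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file, retrying every delay until
// timeout -- the same Delay/Timeout shape printed in the log line above.
// The lock-file mechanism itself is an assumption for illustration.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}
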
	I1001 23:48:26.097221 1469207 start.go:93] Provisioning new machine with config: &{Name:addons-902832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-902832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:48:26.097328 1469207 start.go:125] createHost starting for "" (driver="docker")
	I1001 23:48:26.098943 1469207 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1001 23:48:26.099205 1469207 start.go:159] libmachine.API.Create for "addons-902832" (driver="docker")
	I1001 23:48:26.099245 1469207 client.go:168] LocalClient.Create starting
	I1001 23:48:26.099355 1469207 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem
	I1001 23:48:26.491466 1469207 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/cert.pem
	I1001 23:48:27.046012 1469207 cli_runner.go:164] Run: docker network inspect addons-902832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1001 23:48:27.059629 1469207 cli_runner.go:211] docker network inspect addons-902832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1001 23:48:27.059737 1469207 network_create.go:284] running [docker network inspect addons-902832] to gather additional debugging logs...
	I1001 23:48:27.059763 1469207 cli_runner.go:164] Run: docker network inspect addons-902832
	W1001 23:48:27.075017 1469207 cli_runner.go:211] docker network inspect addons-902832 returned with exit code 1
	I1001 23:48:27.075050 1469207 network_create.go:287] error running [docker network inspect addons-902832]: docker network inspect addons-902832: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-902832 not found
	I1001 23:48:27.075066 1469207 network_create.go:289] output of [docker network inspect addons-902832]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-902832 not found
	
	** /stderr **
	I1001 23:48:27.075169 1469207 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 23:48:27.099547 1469207 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017f34e0}
	I1001 23:48:27.099592 1469207 network_create.go:124] attempt to create docker network addons-902832 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1001 23:48:27.099651 1469207 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-902832 addons-902832
	I1001 23:48:27.168229 1469207 network_create.go:108] docker network addons-902832 192.168.49.0/24 created
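
The network setup above is a probe-then-create pattern: a failing `docker network inspect` means the network is absent, a free private subnet is picked, and `docker network create` is issued with the bridge options shown. A sketch of the same flow (labels omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetwork mirrors the inspect-then-create flow in the log:
// a failing inspect means the network is absent, so we create it
// with the same bridge options minikube passed above.
func ensureNetwork(name, subnet, gateway string) error {
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		return nil // already exists
	}
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureNetwork("addons-902832", "192.168.49.0/24", "192.168.49.1"); err != nil {
		panic(err)
	}
	fmt.Println("network ready")
}
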
	I1001 23:48:27.168261 1469207 kic.go:121] calculated static IP "192.168.49.2" for the "addons-902832" container
	I1001 23:48:27.168340 1469207 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1001 23:48:27.183509 1469207 cli_runner.go:164] Run: docker volume create addons-902832 --label name.minikube.sigs.k8s.io=addons-902832 --label created_by.minikube.sigs.k8s.io=true
	I1001 23:48:27.199863 1469207 oci.go:103] Successfully created a docker volume addons-902832
	I1001 23:48:27.199960 1469207 cli_runner.go:164] Run: docker run --rm --name addons-902832-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-902832 --entrypoint /usr/bin/test -v addons-902832:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1001 23:48:29.066502 1469207 cli_runner.go:217] Completed: docker run --rm --name addons-902832-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-902832 --entrypoint /usr/bin/test -v addons-902832:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (1.866477857s)
	I1001 23:48:29.066532 1469207 oci.go:107] Successfully prepared a docker volume addons-902832
	I1001 23:48:29.066558 1469207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:48:29.066579 1469207 kic.go:194] Starting extracting preloaded images to volume ...
	I1001 23:48:29.066647 1469207 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-902832:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1001 23:48:33.108442 1469207 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-902832:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.04175358s)
	I1001 23:48:33.108497 1469207 kic.go:203] duration metric: took 4.041915307s to extract preloaded images to volume ...
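
The ~4s step just completed runs a throwaway container whose entrypoint is tar, so the lz4 preload unpacks directly into the named volume that will become the node's /var. A sketch replaying that invocation (the digest-pinned image reference from the log is shortened here):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload replays the command from the log: mount the tarball
// read-only, mount the named volume at /extractDir, and let tar in the
// kicbase image do the unpacking (lz4 via -I).
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	start := time.Now()
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	fmt.Printf("extracted in %s\n", time.Since(start))
	return nil
}

func main() {
	// Paths and image are taken from the log; adjust for your environment.
	err := extractPreload(
		"/home/jenkins/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4",
		"addons-902832",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master",
	)
	if err != nil {
		panic(err)
	}
}
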
	W1001 23:48:33.108647 1469207 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1001 23:48:33.108780 1469207 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1001 23:48:33.163355 1469207 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-902832 --name addons-902832 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-902832 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-902832 --network addons-902832 --ip 192.168.49.2 --volume addons-902832:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1001 23:48:33.453713 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Running}}
	I1001 23:48:33.476999 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:48:33.500958 1469207 cli_runner.go:164] Run: docker exec addons-902832 stat /var/lib/dpkg/alternatives/iptables
	I1001 23:48:33.558736 1469207 oci.go:144] the created container "addons-902832" has a running status.
	I1001 23:48:33.558762 1469207 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa...
	I1001 23:48:33.746150 1469207 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1001 23:48:33.769726 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:48:33.798242 1469207 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1001 23:48:33.798260 1469207 kic_runner.go:114] Args: [docker exec --privileged addons-902832 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1001 23:48:33.861110 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:48:33.887303 1469207 machine.go:93] provisionDockerMachine start ...
	I1001 23:48:33.887396 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:33.914677 1469207 main.go:141] libmachine: Using SSH client type: native
	I1001 23:48:33.914942 1469207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1001 23:48:33.914952 1469207 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 23:48:33.916202 1469207 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1001 23:48:37.054819 1469207 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-902832
	
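
The "handshake failed: EOF" line followed by a successful `hostname` run is the usual race against sshd starting inside the fresh container; the dialer simply retries until the handshake succeeds. A sketch of that retry loop, with the port and key path taken from the log:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps redialing until sshd inside the fresh container
// starts answering -- the "handshake failed: EOF" above is exactly this
// race, resolved by a later successful attempt.
func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh never came up: %w", err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	// Port 34294 and the key path are the values from the log.
	c, err := dialWithRetry("127.0.0.1:34294", "docker",
		os.ExpandEnv("$HOME/.minikube/machines/addons-902832/id_rsa"), time.Minute)
	if err != nil {
		panic(err)
	}
	defer c.Close()
	fmt.Println("connected")
}
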
	I1001 23:48:37.054907 1469207 ubuntu.go:169] provisioning hostname "addons-902832"
	I1001 23:48:37.055020 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:37.072218 1469207 main.go:141] libmachine: Using SSH client type: native
	I1001 23:48:37.072467 1469207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1001 23:48:37.072486 1469207 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-902832 && echo "addons-902832" | sudo tee /etc/hostname
	I1001 23:48:37.219276 1469207 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-902832
	
	I1001 23:48:37.219366 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:37.237234 1469207 main.go:141] libmachine: Using SSH client type: native
	I1001 23:48:37.237473 1469207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1001 23:48:37.237500 1469207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-902832' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-902832/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-902832' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:48:37.370975 1469207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:48:37.371002 1469207 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19740-1463060/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-1463060/.minikube}
	I1001 23:48:37.371030 1469207 ubuntu.go:177] setting up certificates
	I1001 23:48:37.371041 1469207 provision.go:84] configureAuth start
	I1001 23:48:37.371100 1469207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-902832
	I1001 23:48:37.387340 1469207 provision.go:143] copyHostCerts
	I1001 23:48:37.387416 1469207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-1463060/.minikube/cert.pem (1123 bytes)
	I1001 23:48:37.387552 1469207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-1463060/.minikube/key.pem (1679 bytes)
	I1001 23:48:37.387624 1469207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.pem (1082 bytes)
	I1001 23:48:37.387673 1469207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca-key.pem org=jenkins.addons-902832 san=[127.0.0.1 192.168.49.2 addons-902832 localhost minikube]
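
provision.go:117 signs a server certificate whose SANs cover the node IP, loopback, and the hostnames listed. A compact sketch of building such a template with crypto/x509; the real flow signs with the minikube CA, whereas this version self-signs for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs copied from the provision.go:117 line above. The real code
	// signs with the minikube CA; this sketch self-signs for brevity.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-902832"}},
		DNSNames:     []string{"addons-902832", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
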
	I1001 23:48:37.734071 1469207 provision.go:177] copyRemoteCerts
	I1001 23:48:37.734170 1469207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:48:37.734216 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:37.750698 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:48:37.847747 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 23:48:37.871254 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 23:48:37.895006 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:48:37.917817 1469207 provision.go:87] duration metric: took 546.753417ms to configureAuth
	I1001 23:48:37.917842 1469207 ubuntu.go:193] setting minikube options for container-runtime
	I1001 23:48:37.918029 1469207 config.go:182] Loaded profile config "addons-902832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:48:37.918141 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:37.936641 1469207 main.go:141] libmachine: Using SSH client type: native
	I1001 23:48:37.936888 1469207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1001 23:48:37.936907 1469207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:48:38.174604 1469207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:48:38.174624 1469207 machine.go:96] duration metric: took 4.28730214s to provisionDockerMachine
	I1001 23:48:38.174634 1469207 client.go:171] duration metric: took 12.075381145s to LocalClient.Create
	I1001 23:48:38.174648 1469207 start.go:167] duration metric: took 12.075444003s to libmachine.API.Create "addons-902832"
	I1001 23:48:38.174655 1469207 start.go:293] postStartSetup for "addons-902832" (driver="docker")
	I1001 23:48:38.174665 1469207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:48:38.174725 1469207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:48:38.174770 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:38.192010 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:48:38.288955 1469207 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:48:38.291799 1469207 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1001 23:48:38.291835 1469207 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1001 23:48:38.291847 1469207 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1001 23:48:38.291854 1469207 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1001 23:48:38.291868 1469207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-1463060/.minikube/addons for local assets ...
	I1001 23:48:38.291940 1469207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-1463060/.minikube/files for local assets ...
	I1001 23:48:38.291972 1469207 start.go:296] duration metric: took 117.311194ms for postStartSetup
	I1001 23:48:38.292286 1469207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-902832
	I1001 23:48:38.308759 1469207 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/config.json ...
	I1001 23:48:38.309036 1469207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 23:48:38.309099 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:38.324673 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:48:38.415951 1469207 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1001 23:48:38.420215 1469207 start.go:128] duration metric: took 12.322869972s to createHost
	I1001 23:48:38.420288 1469207 start.go:83] releasing machines lock for "addons-902832", held for 12.323086148s
	I1001 23:48:38.420391 1469207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-902832
	I1001 23:48:38.436243 1469207 ssh_runner.go:195] Run: cat /version.json
	I1001 23:48:38.436293 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:38.436309 1469207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:48:38.436383 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:48:38.454482 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:48:38.461082 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:48:38.681283 1469207 ssh_runner.go:195] Run: systemctl --version
	I1001 23:48:38.685621 1469207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:48:38.829234 1469207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 23:48:38.833407 1469207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:48:38.854114 1469207 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1001 23:48:38.854233 1469207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:48:38.883936 1469207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1001 23:48:38.883962 1469207 start.go:495] detecting cgroup driver to use...
	I1001 23:48:38.883995 1469207 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1001 23:48:38.884047 1469207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:48:38.900057 1469207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:48:38.911936 1469207 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:48:38.912005 1469207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:48:38.925324 1469207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:48:38.939805 1469207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:48:39.028416 1469207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:48:39.126944 1469207 docker.go:233] disabling docker service ...
	I1001 23:48:39.127055 1469207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:48:39.149248 1469207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:48:39.161253 1469207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:48:39.263791 1469207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:48:39.363479 1469207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:48:39.374836 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:48:39.391099 1469207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:48:39.391242 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.401063 1469207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:48:39.401179 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.411053 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.421638 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.431707 1469207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:48:39.441198 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.451541 1469207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.467518 1469207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:48:39.477676 1469207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:48:39.487379 1469207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:48:39.495644 1469207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:48:39.578225 1469207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:48:39.690484 1469207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:48:39.690627 1469207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
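
start.go:542 gives the restarted runtime a 60s budget for its socket to appear, which is just a stat loop:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket mirrors the "Will wait 60s for socket path" step:
// poll stat until the runtime socket appears or the budget runs out.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is up")
}
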
	I1001 23:48:39.694185 1469207 start.go:563] Will wait 60s for crictl version
	I1001 23:48:39.694246 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:48:39.697533 1469207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:48:39.734732 1469207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1001 23:48:39.734848 1469207 ssh_runner.go:195] Run: crio --version
	I1001 23:48:39.771965 1469207 ssh_runner.go:195] Run: crio --version
	I1001 23:48:39.809513 1469207 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1001 23:48:39.810643 1469207 cli_runner.go:164] Run: docker network inspect addons-902832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 23:48:39.824289 1469207 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1001 23:48:39.827781 1469207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:48:39.838374 1469207 kubeadm.go:883] updating cluster {Name:addons-902832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-902832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:48:39.838496 1469207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:48:39.838558 1469207 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:48:39.908781 1469207 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:48:39.908803 1469207 crio.go:433] Images already preloaded, skipping extraction
	I1001 23:48:39.908859 1469207 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:48:39.947028 1469207 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:48:39.947051 1469207 cache_images.go:84] Images are preloaded, skipping loading
	I1001 23:48:39.947060 1469207 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1001 23:48:39.947162 1469207 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-902832 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-902832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:48:39.947258 1469207 ssh_runner.go:195] Run: crio config
	I1001 23:48:39.992161 1469207 cni.go:84] Creating CNI manager for ""
	I1001 23:48:39.992185 1469207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 23:48:39.992195 1469207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:48:39.992217 1469207 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-902832 NodeName:addons-902832 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:48:39.992364 1469207 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-902832"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:48:39.992436 1469207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:48:40.001226 1469207 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:48:40.001317 1469207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 23:48:40.018850 1469207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1001 23:48:40.039861 1469207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:48:40.059729 1469207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
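
The three "scp memory" transfers above mean the rendered unit files and kubeadm.yaml never touch the local disk: the bytes are generated in memory and streamed over the existing SSH connection. In this sketch, piping through `sudo tee` is an illustrative stand-in for the scp protocol the runner actually speaks:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams an in-memory buffer to a path on the node.
// `sudo tee` is an assumption for illustration; the real transfer
// uses the scp protocol.
func writeRemote(client *ssh.Client, path string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
}

func main() {
	// Port and key path are the values from the log.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/addons-902832/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34294", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := writeRemote(client, "/var/tmp/minikube/kubeadm.yaml.new", []byte("# rendered config\n")); err != nil {
		panic(err)
	}
	fmt.Println("wrote remote file")
}
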
	I1001 23:48:40.078342 1469207 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1001 23:48:40.082159 1469207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:48:40.094632 1469207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:48:40.179150 1469207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:48:40.193853 1469207 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832 for IP: 192.168.49.2
	I1001 23:48:40.193871 1469207 certs.go:194] generating shared ca certs ...
	I1001 23:48:40.193888 1469207 certs.go:226] acquiring lock for ca certs: {Name:mk3f5ff76a5b6681ba8f6985f72e49b1d01e9c88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:40.194027 1469207 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.key
	I1001 23:48:40.428355 1469207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.crt ...
	I1001 23:48:40.428385 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.crt: {Name:mk482523cb013c30b3ab046472a810fd35f37123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:40.429026 1469207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.key ...
	I1001 23:48:40.429043 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.key: {Name:mke64b9ef4d3d7b41b267e67131531c63f1dfe18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:40.429166 1469207 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.key
	I1001 23:48:40.782587 1469207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.crt ...
	I1001 23:48:40.782617 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.crt: {Name:mk51fbc20708189d025a430fb8ae145cb131ba4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:40.782801 1469207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.key ...
	I1001 23:48:40.782813 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.key: {Name:mk427c0c8b72ae0caed528d2040b8d9247afdbaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:40.782891 1469207 certs.go:256] generating profile certs ...
	I1001 23:48:40.782952 1469207 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.key
	I1001 23:48:40.782968 1469207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt with IP's: []
	I1001 23:48:41.212760 1469207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt ...
	I1001 23:48:41.212796 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: {Name:mk9eb884be1ea85b9b4c9866fd707cae21a89748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:41.212995 1469207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.key ...
	I1001 23:48:41.213008 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.key: {Name:mk49ef10649d22c01fc6b3445c976763c2dd36cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:41.213555 1469207 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key.303a9890
	I1001 23:48:41.213581 1469207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt.303a9890 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1001 23:48:42.218783 1469207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt.303a9890 ...
	I1001 23:48:42.218817 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt.303a9890: {Name:mke5876db42c8dd84bc0fcc3061d4d6eeee90942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:42.219013 1469207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key.303a9890 ...
	I1001 23:48:42.219027 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key.303a9890: {Name:mk27e5000269ab7faf0202e7ed91abf6b232c400 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:42.219125 1469207 certs.go:381] copying /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt.303a9890 -> /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt
	I1001 23:48:42.219227 1469207 certs.go:385] copying /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key.303a9890 -> /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key
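
Note the SAN list for the apiserver cert a few lines up includes 10.96.0.1: that is the first usable address of ServiceCIDR 10.96.0.0/12, which Kubernetes assigns to the in-cluster `kubernetes` Service, so clients reaching the apiserver through that ClusterIP still see a valid certificate. Deriving it:

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the first usable address of the service CIDR,
// which Kubernetes assigns to the `kubernetes` ClusterIP Service --
// the 10.96.0.1 entry in the SAN list above. (Simplified: no carry
// across octets, which is fine for CIDRs like this one.)
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[len(out)-1]++ // network address + 1
	return out, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}
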
	I1001 23:48:42.219286 1469207 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.key
	I1001 23:48:42.219309 1469207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.crt with IP's: []
	I1001 23:48:42.569725 1469207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.crt ...
	I1001 23:48:42.569757 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.crt: {Name:mk417b9830923bd6e8c521aad7faed88ddb7228d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:42.569948 1469207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.key ...
	I1001 23:48:42.569962 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.key: {Name:mk1dc85790736919b082a6218cc7cc5613fad41e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:48:42.570152 1469207 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:48:42.570194 1469207 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/ca.pem (1082 bytes)
	I1001 23:48:42.570218 1469207 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:48:42.570250 1469207 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1463060/.minikube/certs/key.pem (1679 bytes)
	I1001 23:48:42.570838 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:48:42.596372 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1001 23:48:42.620146 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:48:42.643309 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:48:42.666497 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 23:48:42.690598 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:48:42.713920 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:48:42.737177 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:48:42.760433 1469207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:48:42.783806 1469207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:48:42.801423 1469207 ssh_runner.go:195] Run: openssl version
	I1001 23:48:42.806694 1469207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:48:42.815879 1469207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:48:42.819067 1469207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:48:42.819128 1469207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:48:42.825696 1469207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
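
The b5213941.0 link name is OpenSSL's subject-name hash of minikubeCA.pem (what the `openssl x509 -hash -noout` run two lines up prints); OpenSSL locates trust anchors by looking up <hash>.N in /etc/ssl/certs. A sketch reproducing the pair of commands:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink reproduces the two commands above: ask openssl for the
// subject-name hash of the CA, then point /etc/ssl/certs/<hash>.0 at it
// so anything using the system trust store picks it up.
func hashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
	os.Remove(link) // ln -fs semantics
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
	fmt.Println("CA linked into system trust store")
}
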
	I1001 23:48:42.834651 1469207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:48:42.837867 1469207 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:48:42.837935 1469207 kubeadm.go:392] StartCluster: {Name:addons-902832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-902832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:48:42.838028 1469207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:48:42.838095 1469207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:48:42.874460 1469207 cri.go:89] found id: ""
	I1001 23:48:42.874535 1469207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 23:48:42.883341 1469207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 23:48:42.891876 1469207 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1001 23:48:42.892007 1469207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 23:48:42.900749 1469207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 23:48:42.900769 1469207 kubeadm.go:157] found existing configuration files:
	
	I1001 23:48:42.900847 1469207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 23:48:42.909316 1469207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 23:48:42.909409 1469207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 23:48:42.917945 1469207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 23:48:42.926922 1469207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 23:48:42.926991 1469207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 23:48:42.935402 1469207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 23:48:42.943850 1469207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 23:48:42.943914 1469207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 23:48:42.952251 1469207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 23:48:42.961152 1469207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 23:48:42.961246 1469207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 23:48:42.969200 1469207 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1001 23:48:43.013654 1469207 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 23:48:43.013715 1469207 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 23:48:43.033727 1469207 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1001 23:48:43.033806 1469207 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1001 23:48:43.033845 1469207 kubeadm.go:310] OS: Linux
	I1001 23:48:43.033895 1469207 kubeadm.go:310] CGROUPS_CPU: enabled
	I1001 23:48:43.033946 1469207 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1001 23:48:43.033997 1469207 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1001 23:48:43.034051 1469207 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1001 23:48:43.034101 1469207 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1001 23:48:43.034159 1469207 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1001 23:48:43.034211 1469207 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1001 23:48:43.034268 1469207 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1001 23:48:43.034318 1469207 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1001 23:48:43.099436 1469207 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 23:48:43.099566 1469207 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 23:48:43.099678 1469207 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 23:48:43.111574 1469207 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 23:48:43.115690 1469207 out.go:235]   - Generating certificates and keys ...
	I1001 23:48:43.115900 1469207 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 23:48:43.115985 1469207 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 23:48:43.295288 1469207 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 23:48:44.120949 1469207 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 23:48:44.475930 1469207 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 23:48:44.911551 1469207 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 23:48:46.185018 1469207 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 23:48:46.185272 1469207 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-902832 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 23:48:46.544115 1469207 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 23:48:46.544255 1469207 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-902832 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 23:48:46.818696 1469207 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 23:48:47.136978 1469207 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 23:48:47.521407 1469207 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 23:48:47.521591 1469207 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 23:48:47.913537 1469207 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 23:48:48.164625 1469207 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 23:48:48.391968 1469207 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 23:48:48.710116 1469207 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 23:48:49.047213 1469207 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 23:48:49.047956 1469207 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 23:48:49.053436 1469207 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 23:48:49.055086 1469207 out.go:235]   - Booting up control plane ...
	I1001 23:48:49.055217 1469207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 23:48:49.055296 1469207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 23:48:49.056383 1469207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 23:48:49.066757 1469207 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 23:48:49.072732 1469207 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 23:48:49.072794 1469207 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 23:48:49.166277 1469207 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 23:48:49.166418 1469207 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 23:48:50.167664 1469207 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001500285s
	I1001 23:48:50.167760 1469207 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 23:48:55.669893 1469207 kubeadm.go:310] [api-check] The API server is healthy after 5.50221357s
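Both waits above poll plain HTTP(S) health endpoints that can be queried by hand from inside the node (e.g. via minikube ssh). A sketch; port 8443 matches this profile's control-plane endpoint, and /livez is assumed available on this API server version (/healthz also still answers):

    # Kubelet health (the 10248 endpoint kubeadm polls above).
    curl -fsS http://127.0.0.1:10248/healthz; echo
    # API server health; -k skips verification against the cluster CA.
    curl -fsSk https://127.0.0.1:8443/livez; echo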
	I1001 23:48:55.689502 1469207 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 23:48:55.702929 1469207 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 23:48:55.724852 1469207 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 23:48:55.725111 1469207 kubeadm.go:310] [mark-control-plane] Marking the node addons-902832 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 23:48:55.735753 1469207 kubeadm.go:310] [bootstrap-token] Using token: np6l28.nq98jby1xj6o1njh
	I1001 23:48:55.737057 1469207 out.go:235]   - Configuring RBAC rules ...
	I1001 23:48:55.737179 1469207 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 23:48:55.744238 1469207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 23:48:55.750926 1469207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 23:48:55.754136 1469207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 23:48:55.757140 1469207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 23:48:55.762345 1469207 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 23:48:56.078202 1469207 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 23:48:56.504886 1469207 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 23:48:57.077291 1469207 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 23:48:57.078423 1469207 kubeadm.go:310] 
	I1001 23:48:57.078494 1469207 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 23:48:57.078500 1469207 kubeadm.go:310] 
	I1001 23:48:57.078576 1469207 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 23:48:57.078581 1469207 kubeadm.go:310] 
	I1001 23:48:57.078608 1469207 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 23:48:57.078667 1469207 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 23:48:57.078716 1469207 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 23:48:57.078720 1469207 kubeadm.go:310] 
	I1001 23:48:57.078773 1469207 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 23:48:57.078778 1469207 kubeadm.go:310] 
	I1001 23:48:57.078824 1469207 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 23:48:57.078828 1469207 kubeadm.go:310] 
	I1001 23:48:57.078880 1469207 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 23:48:57.078953 1469207 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 23:48:57.079020 1469207 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 23:48:57.079028 1469207 kubeadm.go:310] 
	I1001 23:48:57.079111 1469207 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 23:48:57.079205 1469207 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 23:48:57.079211 1469207 kubeadm.go:310] 
	I1001 23:48:57.079293 1469207 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token np6l28.nq98jby1xj6o1njh \
	I1001 23:48:57.079394 1469207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5208ef0b8fca8d57e76f0c6fa712e05fed0b080e4466dd6159bacdcc4fe52560 \
	I1001 23:48:57.079415 1469207 kubeadm.go:310] 	--control-plane 
	I1001 23:48:57.079419 1469207 kubeadm.go:310] 
	I1001 23:48:57.079502 1469207 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 23:48:57.079507 1469207 kubeadm.go:310] 
	I1001 23:48:57.079587 1469207 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token np6l28.nq98jby1xj6o1njh \
	I1001 23:48:57.079687 1469207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5208ef0b8fca8d57e76f0c6fa712e05fed0b080e4466dd6159bacdcc4fe52560 
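The --discovery-token-ca-cert-hash in both join commands is the SHA-256 of the cluster CA's public key, recomputable with the standard kubeadm recipe. A sketch using minikube's certificate directory from the [certs] step above (stock kubeadm keeps the CA at /etc/kubernetes/pki/ca.crt); the openssl rsa step assumes an RSA CA key:

    # Recompute the discovery-token-ca-cert-hash from the cluster CA certificate.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the sha256:5208ef0b... value embedded in the join commands above.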
	I1001 23:48:57.082243 1469207 kubeadm.go:310] W1001 23:48:43.010228    1184 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:48:57.082537 1469207 kubeadm.go:310] W1001 23:48:43.011176    1184 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:48:57.082755 1469207 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1001 23:48:57.082866 1469207 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 23:48:57.082884 1469207 cni.go:84] Creating CNI manager for ""
	I1001 23:48:57.082895 1469207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 23:48:57.084703 1469207 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 23:48:57.085971 1469207 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 23:48:57.090354 1469207 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 23:48:57.090373 1469207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 23:48:57.110517 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
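The manifest applied here is the kindnet CNI recommended above for the "docker" driver + "crio" runtime combination. A quick way to confirm it landed; the app=kindnet label is an assumption based on the upstream kindnet manifest, not something this log confirms:

    # Check that the kindnet pods schedule (label selector assumed).
    kubectl --context addons-902832 -n kube-system get pods -l app=kindnet -o wide
    # CNI config dropped on the node.
    ls /etc/cni/net.d/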
	I1001 23:48:57.388161 1469207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 23:48:57.388287 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:48:57.388362 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-902832 minikube.k8s.io/updated_at=2024_10_01T23_48_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=addons-902832 minikube.k8s.io/primary=true
	I1001 23:48:57.533804 1469207 ops.go:34] apiserver oom_adj: -16
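The -16 read back here is the legacy /proc/<pid>/oom_adj view of the oom_score_adj the kubelet assigns to critical static pods (-997, scaled into oom_adj's -17..15 range), so the kernel OOM killer strongly prefers other victims over kube-apiserver. The probe itself is just:

    # Same probe minikube runs above: read the API server's legacy OOM adjustment.
    cat /proc/"$(pgrep -o kube-apiserver)"/oom_adj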
	I1001 23:48:57.533969 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:48:58.034673 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:48:58.534540 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:48:59.034842 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:48:59.534615 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:49:00.034213 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:49:00.534665 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:49:01.034662 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:49:01.534019 1469207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:49:01.636955 1469207 kubeadm.go:1113] duration metric: took 4.248713634s to wait for elevateKubeSystemPrivileges
	I1001 23:49:01.636988 1469207 kubeadm.go:394] duration metric: took 18.799058587s to StartCluster
	I1001 23:49:01.637007 1469207 settings.go:142] acquiring lock: {Name:mk9069fc4941965284bfe98880a9f5d91bac598f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:49:01.637144 1469207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-1463060/kubeconfig
	I1001 23:49:01.637619 1469207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1463060/kubeconfig: {Name:mk74b9b3ba7b209d36f296358939f489e2673d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:49:01.638265 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 23:49:01.638291 1469207 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:49:01.638556 1469207 config.go:182] Loaded profile config "addons-902832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:49:01.638598 1469207 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
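The toEnable map above mirrors the per-profile addon toggles exposed by the minikube CLI; for reference, the equivalent manual commands look like:

    # Inspect and toggle addons for this profile (examples, not from this run).
    minikube -p addons-902832 addons list
    minikube -p addons-902832 addons enable metrics-server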
	I1001 23:49:01.638684 1469207 addons.go:69] Setting yakd=true in profile "addons-902832"
	I1001 23:49:01.638699 1469207 addons.go:234] Setting addon yakd=true in "addons-902832"
	I1001 23:49:01.638726 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.639238 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.639579 1469207 addons.go:69] Setting metrics-server=true in profile "addons-902832"
	I1001 23:49:01.639600 1469207 addons.go:234] Setting addon metrics-server=true in "addons-902832"
	I1001 23:49:01.639634 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.640079 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.641636 1469207 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-902832"
	I1001 23:49:01.643464 1469207 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-902832"
	I1001 23:49:01.643572 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.641821 1469207 addons.go:69] Setting registry=true in profile "addons-902832"
	I1001 23:49:01.641839 1469207 addons.go:69] Setting storage-provisioner=true in profile "addons-902832"
	I1001 23:49:01.641847 1469207 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-902832"
	I1001 23:49:01.641855 1469207 addons.go:69] Setting volcano=true in profile "addons-902832"
	I1001 23:49:01.641861 1469207 addons.go:69] Setting volumesnapshots=true in profile "addons-902832"
	I1001 23:49:01.641910 1469207 out.go:177] * Verifying Kubernetes components...
	I1001 23:49:01.642151 1469207 addons.go:69] Setting ingress=true in profile "addons-902832"
	I1001 23:49:01.642158 1469207 addons.go:69] Setting cloud-spanner=true in profile "addons-902832"
	I1001 23:49:01.642164 1469207 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-902832"
	I1001 23:49:01.642169 1469207 addons.go:69] Setting default-storageclass=true in profile "addons-902832"
	I1001 23:49:01.642173 1469207 addons.go:69] Setting gcp-auth=true in profile "addons-902832"
	I1001 23:49:01.642182 1469207 addons.go:69] Setting inspektor-gadget=true in profile "addons-902832"
	I1001 23:49:01.642187 1469207 addons.go:69] Setting ingress-dns=true in profile "addons-902832"
	I1001 23:49:01.643855 1469207 addons.go:234] Setting addon ingress-dns=true in "addons-902832"
	I1001 23:49:01.643912 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.644439 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.647789 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.659716 1469207 addons.go:234] Setting addon ingress=true in "addons-902832"
	I1001 23:49:01.659785 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.660268 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.663972 1469207 addons.go:234] Setting addon registry=true in "addons-902832"
	I1001 23:49:01.664089 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.664605 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.670085 1469207 addons.go:234] Setting addon cloud-spanner=true in "addons-902832"
	I1001 23:49:01.670143 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.670146 1469207 addons.go:234] Setting addon storage-provisioner=true in "addons-902832"
	I1001 23:49:01.670185 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.670634 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.670647 1469207 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-902832"
	I1001 23:49:01.670881 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.694655 1469207 addons.go:234] Setting addon volcano=true in "addons-902832"
	I1001 23:49:01.694719 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.695228 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.670635 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.711323 1469207 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-902832"
	I1001 23:49:01.711423 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.711964 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.717344 1469207 addons.go:234] Setting addon volumesnapshots=true in "addons-902832"
	I1001 23:49:01.717401 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.718007 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.737762 1469207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-902832"
	I1001 23:49:01.737833 1469207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:49:01.738127 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.753155 1469207 mustload.go:65] Loading cluster: addons-902832
	I1001 23:49:01.753390 1469207 config.go:182] Loaded profile config "addons-902832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:49:01.753690 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.779087 1469207 addons.go:234] Setting addon inspektor-gadget=true in "addons-902832"
	I1001 23:49:01.779140 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.779879 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.811020 1469207 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1001 23:49:01.822646 1469207 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1001 23:49:01.829645 1469207 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1001 23:49:01.829812 1469207 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 23:49:01.847937 1469207 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 23:49:01.848205 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:01.829846 1469207 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1001 23:49:01.829971 1469207 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 23:49:01.830086 1469207 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1001 23:49:01.850062 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 23:49:01.850132 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:01.830091 1469207 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1001 23:49:01.854133 1469207 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 23:49:01.854187 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 23:49:01.854284 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:01.862580 1469207 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 23:49:01.862603 1469207 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 23:49:01.862669 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:01.870271 1469207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 23:49:01.870559 1469207 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 23:49:01.870581 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1001 23:49:01.870649 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	W1001 23:49:01.883475 1469207 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1001 23:49:01.925336 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 23:49:01.926158 1469207 out.go:177]   - Using image docker.io/registry:2.8.3
	I1001 23:49:01.926715 1469207 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-902832"
	I1001 23:49:01.927381 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.927927 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.966506 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 23:49:01.967450 1469207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 23:49:01.969160 1469207 addons.go:234] Setting addon default-storageclass=true in "addons-902832"
	I1001 23:49:01.969253 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.975681 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 23:49:01.977149 1469207 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 23:49:01.975856 1469207 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 23:49:01.975969 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:01.976093 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:01.977341 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.012100 1469207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1001 23:49:02.019584 1469207 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 23:49:02.019615 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 23:49:02.019693 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:01.977349 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 23:49:02.020920 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.035536 1469207 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1001 23:49:02.036014 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 23:49:02.036170 1469207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:49:02.036360 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 23:49:02.043505 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1001 23:49:02.043536 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1001 23:49:02.043619 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.043891 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
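The Port:34294 in these ssh clients is the host port Docker mapped to the container's sshd, which is what the repeated docker container inspect -f ...HostPort... calls above extract. Two equivalent lookups, plus the resulting connection (username and key path taken from the log line above):

    # Go-template lookup minikube uses, and the shorter docker-native equivalent.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-902832
    docker port addons-902832 22/tcp
    # Manual connection using the same identity minikube's sshutil builds.
    ssh -i /home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa \
        -p 34294 docker@127.0.0.1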
	I1001 23:49:02.063925 1469207 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:49:02.064029 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 23:49:02.064531 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.091558 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 23:49:02.101519 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 23:49:02.104105 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.104809 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.126886 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.127989 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.136066 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 23:49:02.139280 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 23:49:02.143543 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 23:49:02.145962 1469207 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 23:49:02.148999 1469207 out.go:177]   - Using image docker.io/busybox:stable
	I1001 23:49:02.149117 1469207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 23:49:02.155617 1469207 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 23:49:02.155642 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 23:49:02.155708 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.175063 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 23:49:02.175128 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 23:49:02.175275 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.179270 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.181671 1469207 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 23:49:02.181686 1469207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 23:49:02.181751 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:02.202524 1469207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:49:02.224584 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.225278 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.226876 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.273486 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.273855 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.276920 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.286586 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:02.416888 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 23:49:02.548515 1469207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 23:49:02.548578 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 23:49:02.610722 1469207 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 23:49:02.610794 1469207 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 23:49:02.613587 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 23:49:02.641055 1469207 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 23:49:02.641129 1469207 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 23:49:02.653277 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 23:49:02.690331 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:49:02.691587 1469207 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 23:49:02.691648 1469207 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 23:49:02.706075 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 23:49:02.714160 1469207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 23:49:02.714235 1469207 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 23:49:02.723319 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 23:49:02.733696 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 23:49:02.772383 1469207 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 23:49:02.772454 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 23:49:02.782365 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1001 23:49:02.782442 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1001 23:49:02.787205 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 23:49:02.787276 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 23:49:02.805774 1469207 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 23:49:02.805847 1469207 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 23:49:02.830174 1469207 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 23:49:02.830249 1469207 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 23:49:02.881777 1469207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 23:49:02.881848 1469207 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 23:49:02.907796 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1001 23:49:02.907877 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1001 23:49:02.913171 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 23:49:02.956583 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 23:49:02.956668 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 23:49:03.006805 1469207 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 23:49:03.006896 1469207 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 23:49:03.014541 1469207 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 23:49:03.014569 1469207 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 23:49:03.069505 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 23:49:03.071805 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1001 23:49:03.071876 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1001 23:49:03.120594 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 23:49:03.120621 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 23:49:03.155118 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 23:49:03.155144 1469207 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 23:49:03.207882 1469207 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 23:49:03.207911 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 23:49:03.209277 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1001 23:49:03.209299 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1001 23:49:03.227326 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 23:49:03.227352 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 23:49:03.285597 1469207 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 23:49:03.285622 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 23:49:03.315326 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 23:49:03.352797 1469207 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 23:49:03.352823 1469207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 23:49:03.352974 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 23:49:03.353048 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1001 23:49:03.353059 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1001 23:49:03.446709 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 23:49:03.446736 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 23:49:03.457385 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1001 23:49:03.457412 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1001 23:49:03.569743 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 23:49:03.569771 1469207 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1001 23:49:03.578767 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 23:49:03.578793 1469207 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 23:49:03.666559 1469207 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 23:49:03.666629 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1001 23:49:03.711708 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 23:49:03.720600 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 23:49:03.720670 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 23:49:03.799016 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 23:49:03.799083 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 23:49:03.929375 1469207 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 23:49:03.929450 1469207 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 23:49:04.089253 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 23:49:04.861879 1469207 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.825832416s)
	I1001 23:49:04.862022 1469207 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
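The sed pipeline that just completed rewrites the coredns ConfigMap in place, adding a hosts block so pods can resolve host.minikube.internal to the gateway. A sketch for verifying the injected record (the jsonpath key is the standard Corefile entry in the coredns ConfigMap):

    # Confirm the host record landed in the live Corefile.
    kubectl --context addons-902832 -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'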
	I1001 23:49:04.861978 1469207 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.659432373s)
	I1001 23:49:04.862840 1469207 node_ready.go:35] waiting up to 6m0s for node "addons-902832" to be "Ready" ...
	I1001 23:49:04.864080 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.447165411s)
	I1001 23:49:05.620732 1469207 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-902832" context rescaled to 1 replicas
	I1001 23:49:06.094829 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.481166142s)
	I1001 23:49:06.271623 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.618265498s)
	I1001 23:49:06.716737 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.026325174s)
	I1001 23:49:06.716841 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.010699949s)
	I1001 23:49:06.857937 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.134540013s)
	I1001 23:49:06.878165 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
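node_ready polls the node's Ready condition until kindnet brings pod networking up. The same check by hand:

    # Print the Ready condition status minikube is waiting on ("True" once CNI is up).
    kubectl --context addons-902832 get node addons-902832 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; echo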
	I1001 23:49:07.643991 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.910162839s)
	I1001 23:49:07.644024 1469207 addons.go:475] Verifying addon ingress=true in "addons-902832"
	I1001 23:49:07.644222 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.730982993s)
	I1001 23:49:07.644241 1469207 addons.go:475] Verifying addon registry=true in "addons-902832"
	I1001 23:49:07.644527 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.574981329s)
	I1001 23:49:07.644547 1469207 addons.go:475] Verifying addon metrics-server=true in "addons-902832"
	I1001 23:49:07.647792 1469207 out.go:177] * Verifying registry addon...
	I1001 23:49:07.647877 1469207 out.go:177] * Verifying ingress addon...
	I1001 23:49:07.652272 1469207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 23:49:07.653193 1469207 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 23:49:07.694387 1469207 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 23:49:07.694420 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:07.695299 1469207 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 23:49:07.695319 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:07.897759 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.582384486s)
	W1001 23:49:07.897799 1469207 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 23:49:07.897821 1469207 retry.go:31] will retry after 301.384921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
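The failure above is the classic CRD ordering race: a single kubectl apply submits the VolumeSnapshotClass before the CRD that defines it is established, so the REST mapping lookup fails. minikube's retry below (with --force) happens to win the race; the more deterministic fix is a two-phase apply, sketched here with one of the CRDs from the file list above:

    # Phase 1: install the CRDs and wait until the API server serves them.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    # Phase 2: the custom resources that depend on them now resolve.
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml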
	I1001 23:49:07.897868 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.544878141s)
	I1001 23:49:07.898089 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.18630257s)
	I1001 23:49:07.902086 1469207 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-902832 service yakd-dashboard -n yakd-dashboard
	
	I1001 23:49:08.136197 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.046841038s)
	I1001 23:49:08.136232 1469207 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-902832"
	I1001 23:49:08.139010 1469207 out.go:177] * Verifying csi-hostpath-driver addon...
	I1001 23:49:08.142520 1469207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 23:49:08.154031 1469207 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 23:49:08.154053 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:08.158668 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:08.182952 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:08.200360 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 23:49:08.646596 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:08.708735 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:08.746151 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:09.148366 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:09.159990 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:09.168972 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:09.367218 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:09.645982 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:09.656164 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:09.657299 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:10.147692 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:10.160511 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:10.162501 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:10.646713 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:10.657410 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:10.658023 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:11.147035 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:11.160614 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:11.162386 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:11.227609 1469207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.02720651s)
	I1001 23:49:11.647351 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:11.656257 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:11.658952 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:11.866128 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:11.888952 1469207 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 23:49:11.889043 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:11.906966 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:12.015475 1469207 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 23:49:12.035460 1469207 addons.go:234] Setting addon gcp-auth=true in "addons-902832"
	I1001 23:49:12.035520 1469207 host.go:66] Checking if "addons-902832" exists ...
	I1001 23:49:12.035980 1469207 cli_runner.go:164] Run: docker container inspect addons-902832 --format={{.State.Status}}
	I1001 23:49:12.058360 1469207 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 23:49:12.058480 1469207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-902832
	I1001 23:49:12.076020 1469207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/addons-902832/id_rsa Username:docker}
	I1001 23:49:12.146147 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:12.157837 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:12.158973 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:12.185770 1469207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 23:49:12.188464 1469207 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1001 23:49:12.191203 1469207 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 23:49:12.191248 1469207 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 23:49:12.222614 1469207 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 23:49:12.222648 1469207 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 23:49:12.241158 1469207 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 23:49:12.241187 1469207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 23:49:12.277784 1469207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 23:49:12.649079 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:12.657731 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:12.659207 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:12.904176 1469207 addons.go:475] Verifying addon gcp-auth=true in "addons-902832"
	I1001 23:49:12.907607 1469207 out.go:177] * Verifying gcp-auth addon...
	I1001 23:49:12.911050 1469207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 23:49:12.921096 1469207 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 23:49:12.921161 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:13.147127 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:13.155648 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:13.157261 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:13.415586 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:13.646880 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:13.656014 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:13.657309 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:13.866600 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:13.914763 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:14.146973 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:14.156676 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:14.157447 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:14.414625 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:14.646544 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:14.656367 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:14.658598 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:14.914570 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:15.146747 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:15.156746 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:15.157844 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:15.414544 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:15.646270 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:15.655130 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:15.657923 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:15.914895 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:16.146284 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:16.155472 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:16.156900 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:16.366357 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:16.414430 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:16.645941 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:16.656747 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:16.657640 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:16.914785 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:17.146542 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:17.155587 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:17.157834 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:17.414664 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:17.646770 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:17.655530 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:17.658496 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:17.915101 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:18.146614 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:18.156272 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:18.157504 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:18.366482 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:18.414544 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:18.646151 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:18.656855 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:18.657106 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:18.914669 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:19.146308 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:19.155752 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:19.158879 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:19.414366 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:19.646284 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:19.655457 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:19.657524 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:19.914630 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:20.146301 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:20.156324 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:20.157658 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:20.414653 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:20.646207 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:20.655635 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:20.657216 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:20.866712 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:20.914712 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:21.146366 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:21.156672 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:21.158051 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:21.414315 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:21.645915 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:21.655526 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:21.657565 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:21.915049 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:22.146781 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:22.156097 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:22.157425 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:22.414025 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:22.646109 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:22.657787 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:22.658505 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:22.915354 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:23.146293 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:23.156565 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:23.157565 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:23.366783 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:23.414355 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:23.646379 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:23.656106 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:23.656849 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:23.915216 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:24.146521 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:24.156345 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:24.157139 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:24.414498 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:24.646147 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:24.655907 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:24.658962 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:24.914167 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:25.147265 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:25.158447 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:25.159250 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:25.414120 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:25.647051 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:25.656668 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:25.657404 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:25.866822 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:25.914999 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:26.146124 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:26.155705 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:26.156966 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:26.416759 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:26.646237 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:26.655325 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:26.657701 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:26.914881 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:27.146321 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:27.159982 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:27.161097 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:27.414497 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:27.646624 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:27.656561 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:27.657647 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:27.916737 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:28.145750 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:28.155653 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:28.158422 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:28.366730 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:28.414610 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:28.647216 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:28.655306 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:28.657001 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:28.915205 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:29.146324 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:29.155211 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:29.156791 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:29.414684 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:29.646511 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:29.655695 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:29.656900 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:29.914577 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:30.146918 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:30.156710 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:30.157936 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:30.414789 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:30.646180 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:30.655704 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:30.657191 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:30.866459 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:30.914952 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:31.146407 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:31.155123 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:31.157072 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:31.414445 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:31.646621 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:31.656707 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:31.657664 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:31.915112 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:32.146860 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:32.156365 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:32.157168 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:32.415336 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:32.645805 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:32.656349 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:32.657097 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:32.866639 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:32.914992 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:33.146452 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:33.155532 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:33.156744 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:33.414225 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:33.646504 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:33.656135 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:33.656925 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:33.914452 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:34.146138 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:34.155839 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:34.157607 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:34.414652 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:34.646844 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:34.656138 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:34.656939 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:34.914511 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:35.146816 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:35.156847 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:35.158093 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:35.366216 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:35.414381 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:35.647446 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:35.656271 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:35.656984 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:35.915237 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:36.145890 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:36.156547 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:36.157164 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:36.414880 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:36.646938 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:36.656688 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:36.657258 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:36.914592 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:37.146477 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:37.156491 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:37.156991 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:37.366998 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:37.414885 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:37.645884 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:37.655298 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:37.656801 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:37.914330 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:38.145778 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:38.157237 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:38.157402 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:38.414620 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:38.646479 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:38.656345 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:38.656985 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:38.915023 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:39.146084 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:39.157044 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:39.157894 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:39.414642 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:39.647086 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:39.655346 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:39.657540 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:39.865921 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:39.915582 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:40.146612 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:40.155880 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:40.157958 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:40.415224 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:40.647717 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:40.656483 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:40.657158 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:40.914741 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:41.146861 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:41.157235 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:41.158062 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:41.414658 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:41.646468 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:41.656630 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:41.657014 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:41.866158 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:41.914912 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:42.146622 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:42.156194 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:42.158595 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:42.414297 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:42.645861 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:42.656511 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:42.657362 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:42.915005 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:43.145885 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:43.157399 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:43.157562 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:43.414999 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:43.646092 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:43.656695 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:43.657856 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:43.866233 1469207 node_ready.go:53] node "addons-902832" has status "Ready":"False"
	I1001 23:49:43.914243 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:44.146337 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:44.155982 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:44.157201 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:44.414181 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:44.646915 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:44.655352 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:44.657963 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:44.914010 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:45.147829 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:45.158222 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:45.158450 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:45.414544 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:45.646846 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:45.655633 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:45.657389 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:45.879353 1469207 node_ready.go:49] node "addons-902832" has status "Ready":"True"
	I1001 23:49:45.879387 1469207 node_ready.go:38] duration metric: took 41.016524571s for node "addons-902832" to be "Ready" ...
	I1001 23:49:45.879397 1469207 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:49:45.892293 1469207 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xljjm" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:45.949812 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:46.161823 1469207 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 23:49:46.161851 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:46.166737 1469207 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 23:49:46.166764 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:46.167588 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:46.450375 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:46.647554 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:46.659238 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:46.660217 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:46.916226 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:47.152263 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:47.170016 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:47.171363 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:47.416345 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:47.647962 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:47.669055 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:47.750674 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:47.905341 1469207 pod_ready.go:103] pod "coredns-7c65d6cfc9-xljjm" in "kube-system" namespace has status "Ready":"False"
	I1001 23:49:47.920837 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:48.151352 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:48.248569 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:48.250009 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:48.415145 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:48.647845 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:48.657816 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:48.657991 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:48.901213 1469207 pod_ready.go:93] pod "coredns-7c65d6cfc9-xljjm" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:48.901238 1469207 pod_ready.go:82] duration metric: took 3.008913715s for pod "coredns-7c65d6cfc9-xljjm" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.901257 1469207 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.906292 1469207 pod_ready.go:93] pod "etcd-addons-902832" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:48.906315 1469207 pod_ready.go:82] duration metric: took 5.028534ms for pod "etcd-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.906330 1469207 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.911306 1469207 pod_ready.go:93] pod "kube-apiserver-addons-902832" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:48.911330 1469207 pod_ready.go:82] duration metric: took 4.992826ms for pod "kube-apiserver-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.911341 1469207 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.915591 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:48.917416 1469207 pod_ready.go:93] pod "kube-controller-manager-addons-902832" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:48.917437 1469207 pod_ready.go:82] duration metric: took 6.088957ms for pod "kube-controller-manager-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.917451 1469207 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kx8p9" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.924005 1469207 pod_ready.go:93] pod "kube-proxy-kx8p9" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:48.924031 1469207 pod_ready.go:82] duration metric: took 6.57235ms for pod "kube-proxy-kx8p9" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:48.924042 1469207 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:49.146988 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:49.155813 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:49.158741 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:49.296976 1469207 pod_ready.go:93] pod "kube-scheduler-addons-902832" in "kube-system" namespace has status "Ready":"True"
	I1001 23:49:49.297004 1469207 pod_ready.go:82] duration metric: took 372.952028ms for pod "kube-scheduler-addons-902832" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:49.297016 1469207 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace to be "Ready" ...
	I1001 23:49:49.414104 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:49.646968 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:49.656090 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:49.658164 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:49.914772 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:50.147897 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:50.157406 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:50.158783 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:50.415634 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:50.648544 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:50.658704 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:50.660769 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:50.914593 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:51.148394 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:51.157232 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:51.159726 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:51.304833 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:49:51.415652 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:51.653115 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:51.666618 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:51.667301 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:51.914854 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:52.148842 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:52.158821 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:52.159034 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:52.416115 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:52.662699 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:52.668163 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:52.669216 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:52.915065 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:53.150713 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:53.172951 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:53.175853 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:53.415574 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:53.647451 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:53.656454 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:53.658574 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:53.805896 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:49:53.915357 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:54.149539 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:54.159514 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:54.161060 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:54.414951 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:54.651017 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:54.665061 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:54.665703 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:54.914952 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:55.148832 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:55.158945 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:55.160572 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:55.415374 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:55.648326 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:55.658130 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:55.660947 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:55.809419 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:49:55.914767 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:56.147342 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:56.156920 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:56.158100 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:56.415063 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:56.648190 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:56.656377 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:56.658149 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:56.914551 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:57.148141 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:57.157566 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:57.157942 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:57.416641 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:57.648140 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:57.660149 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:57.660640 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:57.915455 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:58.147559 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:58.160529 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:58.177002 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:58.304060 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:49:58.416031 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:58.648682 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:58.672050 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:58.674149 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:58.922172 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:59.147756 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:59.158577 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:59.159836 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:59.415625 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:49:59.648010 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:49:59.658223 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:49:59.659456 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:49:59.915004 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:00.164670 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:00.198528 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:00.199557 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:00.323921 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:00.416455 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:00.648232 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:00.657488 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:00.658744 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:00.915460 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:01.149252 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:01.157647 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:01.161103 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:01.414880 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:01.648683 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:01.659400 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:01.661221 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:01.916730 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:02.148495 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:02.159084 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:02.161966 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:02.416110 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:02.649265 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:02.664915 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:02.748769 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:02.804883 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:02.916421 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:03.147313 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:03.157213 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:03.158392 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:03.414646 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:03.647627 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:03.655905 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:03.658797 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:03.915141 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:04.147380 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:04.156766 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:04.158972 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:04.415888 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:04.647693 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:04.656924 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:04.658768 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:04.809345 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:04.915536 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:05.148111 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:05.157358 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:05.158664 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:05.414682 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:05.647809 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:05.655896 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:05.657025 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:05.915088 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:06.147142 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:06.159194 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:06.160717 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:06.414474 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:06.648792 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:06.659470 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:06.660449 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:06.816709 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:06.924927 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:07.147847 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:07.156479 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:07.158110 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:07.416086 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:07.649971 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:07.660497 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:07.661215 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:07.917144 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:08.148598 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:08.163674 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:08.165481 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:08.418083 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:08.650047 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:08.658937 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:08.661579 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:08.819092 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:08.917617 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:09.148748 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:09.159506 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:09.160510 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:09.415751 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:09.647839 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:09.665350 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:09.665763 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:09.915512 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:10.147961 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:10.157667 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:10.160397 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:10.415060 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:10.647433 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:10.661223 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:10.662534 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:10.915581 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:11.148260 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:11.155989 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:11.158681 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:11.304379 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:11.414867 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:11.647687 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:11.657131 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:11.658504 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:11.915081 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:12.147960 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:12.157334 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:12.157586 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:12.415111 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:12.648110 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:12.657179 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:12.658063 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:12.934973 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:13.148813 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:13.158799 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:13.160338 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:13.306985 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:13.415315 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:13.650667 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:13.664033 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:13.665775 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:13.915447 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:14.147595 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:14.155975 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:14.158334 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:14.415579 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:14.647580 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:14.656231 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:50:14.658440 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:14.914862 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:15.147369 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:15.157895 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:15.158699 1469207 kapi.go:107] duration metric: took 1m7.506429472s to wait for kubernetes.io/minikube-addons=registry ...
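	The kapi.go:96/107 lines here show minikube polling pods by label selector until they report Ready, then logging the elapsed wait. The same selector can be queried by hand; a minimal sketch, assuming the registry addon pods are visible cluster-wide (the namespace is not named in the log, so -A avoids guessing):

	  # list the registry pods minikube was waiting on, using the selector from the log
	  kubectl --context addons-902832 get pods -A -l kubernetes.io/minikube-addons=registry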
	I1001 23:50:15.414437 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:15.647489 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:15.658110 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:15.804539 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:15.915255 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:16.151792 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:16.159105 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:16.415286 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:16.647966 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:16.658783 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:16.915199 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:17.148417 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:17.157823 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:17.415214 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:17.650098 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:17.664847 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:17.918027 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:18.148467 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:18.157448 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:18.306253 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:18.415686 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:18.648862 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:18.658263 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:18.916141 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:19.148077 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:19.158531 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:19.415042 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:19.656155 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:19.664191 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:19.915215 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:20.150485 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:20.159489 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:20.306692 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:20.416029 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:20.648683 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:20.658780 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:20.915562 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:21.148743 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:21.159591 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:21.415020 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:21.647382 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:21.658579 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:21.915282 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:22.148104 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:22.158971 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:22.415033 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:22.648479 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:22.657004 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:22.803121 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:22.914699 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:23.148387 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:23.157593 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:23.414706 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:23.650946 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:23.657839 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:23.915700 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:24.149670 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:24.166450 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:24.416611 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:24.649440 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:24.658081 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:24.803281 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:24.916429 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:25.148608 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:25.159738 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:25.415701 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:25.649784 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:25.658050 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:25.916564 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:26.149974 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:26.159378 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:26.415058 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:26.647917 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:26.657883 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:26.803945 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:26.915612 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:27.149497 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:27.158245 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:27.415206 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:27.653026 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:27.661771 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:27.916174 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:28.148878 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:28.158281 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:28.415095 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:28.651407 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:28.659622 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:28.811654 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:28.916428 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:29.152267 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:29.157544 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:29.414627 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:29.647256 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:29.657203 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:29.915438 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:30.147773 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:30.158782 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:30.415454 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:30.647025 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:30.658530 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:30.914437 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:31.148247 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:31.158159 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:31.304123 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:31.414638 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:31.648536 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:31.659534 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:31.915314 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:32.154029 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:32.158784 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:32.426509 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:32.647574 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:32.657692 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:32.914786 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:33.169886 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:33.171478 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:33.309631 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:33.415410 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:33.648344 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:33.657972 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:33.914461 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:34.153113 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:34.162749 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:34.414982 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:34.648093 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:34.658159 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:34.915611 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:35.166664 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:35.172492 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:35.414555 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:35.648740 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:35.657819 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:35.803951 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:35.914248 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:36.152004 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:36.158861 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:36.415520 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:36.647599 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:36.657612 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:36.915332 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:37.148176 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:37.157950 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:37.416366 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:37.649728 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:37.657588 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:37.804876 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:37.915097 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:38.152472 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:38.160034 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:38.415437 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:38.647342 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:38.658089 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:38.915576 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:39.148215 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:39.157992 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:39.418560 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:39.647390 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:39.658942 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:39.915061 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:40.150209 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:40.157407 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:40.308511 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:40.415845 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:40.648038 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:40.658402 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:40.915759 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:41.147726 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:41.157654 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:41.415228 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:41.648140 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:41.658279 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:41.914645 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:42.156214 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:42.160082 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:42.421189 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:42.648758 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:42.658308 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:42.803644 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:42.915072 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:43.153411 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:43.158099 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:43.415420 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:43.646990 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:43.657924 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:43.914761 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:44.148044 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:44.158238 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:44.415187 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:44.648942 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:44.660232 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:44.804205 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:44.916058 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:45.149527 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:45.163165 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:45.418421 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:45.647534 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:45.657643 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:45.914858 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:46.147820 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:50:46.160817 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:46.417903 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:46.648262 1469207 kapi.go:107] duration metric: took 1m38.505738227s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 23:50:46.658108 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:46.804744 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:46.914514 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:47.157694 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:47.415994 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:47.658095 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:47.914584 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:48.158389 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:48.415256 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:48.657625 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:48.915362 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:49.158029 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:49.303497 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:49.415513 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:49.657642 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:49.915661 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:50.157349 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:50.415426 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:50.658855 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:50.914765 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:51.157846 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:51.303610 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:51.414352 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:51.657963 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:51.914735 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:52.157977 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:52.414673 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:52.657939 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:52.915641 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:53.158517 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:53.306028 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:53.415683 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:53.658962 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:53.914720 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:54.157807 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:54.415829 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:54.658682 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:54.916087 1469207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:50:55.157738 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:55.415052 1469207 kapi.go:107] duration metric: took 1m42.503997077s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 23:50:55.417475 1469207 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-902832 cluster.
	I1001 23:50:55.419895 1469207 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 23:50:55.422370 1469207 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
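Note: the `gcp-auth-skip-secret` opt-out mentioned above is applied per pod. A minimal sketch, assuming the webhook only checks for the presence of the label key; the pod name `my-pod` is illustrative, not from this run:

	# label an existing pod so the gcp-auth webhook leaves it unmutated
	kubectl --context addons-902832 label pod my-pod gcp-auth-skip-secret=true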
	I1001 23:50:55.659778 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:55.805090 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:56.163037 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:56.658954 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:57.159167 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:57.657881 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:58.158821 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:58.302897 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:50:58.657715 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:59.159350 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:50:59.663334 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:51:00.180356 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:51:00.306853 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:00.659762 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:51:01.159263 1469207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:51:01.659375 1469207 kapi.go:107] duration metric: took 1m54.006175965s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 23:51:01.662952 1469207 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1001 23:51:01.665528 1469207 addons.go:510] duration metric: took 2m0.026921338s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner default-storageclass storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
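For reference, the final addon set reported above can be confirmed against the same profile with the test binary used elsewhere in this report (a sketch; assumes the binary path shown in the test commands):

	# list addon status for the addons-902832 profile
	out/minikube-linux-arm64 -p addons-902832 addons list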
	I1001 23:51:02.803550 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:04.807919 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:07.303634 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:09.304093 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:11.803740 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:14.303218 1469207 pod_ready.go:103] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"False"
	I1001 23:51:16.303306 1469207 pod_ready.go:93] pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace has status "Ready":"True"
	I1001 23:51:16.303381 1469207 pod_ready.go:82] duration metric: took 1m27.006354091s for pod "metrics-server-84c5f94fbc-78xch" in "kube-system" namespace to be "Ready" ...
	I1001 23:51:16.303400 1469207 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zz9mg" in "kube-system" namespace to be "Ready" ...
	I1001 23:51:16.308891 1469207 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-zz9mg" in "kube-system" namespace has status "Ready":"True"
	I1001 23:51:16.308918 1469207 pod_ready.go:82] duration metric: took 5.507726ms for pod "nvidia-device-plugin-daemonset-zz9mg" in "kube-system" namespace to be "Ready" ...
	I1001 23:51:16.308940 1469207 pod_ready.go:39] duration metric: took 1m30.429530912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
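The pod_ready.go wait loop above is roughly what kubectl can express directly. A hedged equivalent for one of the label selectors listed (k8s-app=kube-dns), not the exact internal implementation:

	# block until matching pods report the Ready condition, up to 6 minutes
	kubectl --context addons-902832 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s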
	I1001 23:51:16.308957 1469207 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:51:16.308992 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 23:51:16.309056 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 23:51:16.364413 1469207 cri.go:89] found id: "ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:16.364488 1469207 cri.go:89] found id: ""
	I1001 23:51:16.364512 1469207 logs.go:282] 1 containers: [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7]
	I1001 23:51:16.364604 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.368371 1469207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 23:51:16.368448 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 23:51:16.413247 1469207 cri.go:89] found id: "2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:16.413270 1469207 cri.go:89] found id: ""
	I1001 23:51:16.413278 1469207 logs.go:282] 1 containers: [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575]
	I1001 23:51:16.413359 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.416842 1469207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 23:51:16.416958 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 23:51:16.460115 1469207 cri.go:89] found id: "6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:16.460138 1469207 cri.go:89] found id: ""
	I1001 23:51:16.460146 1469207 logs.go:282] 1 containers: [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d]
	I1001 23:51:16.460202 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.463786 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 23:51:16.463861 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 23:51:16.510372 1469207 cri.go:89] found id: "294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:16.510396 1469207 cri.go:89] found id: ""
	I1001 23:51:16.510404 1469207 logs.go:282] 1 containers: [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178]
	I1001 23:51:16.510474 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.515168 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 23:51:16.515312 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 23:51:16.564144 1469207 cri.go:89] found id: "18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:16.564172 1469207 cri.go:89] found id: ""
	I1001 23:51:16.564180 1469207 logs.go:282] 1 containers: [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93]
	I1001 23:51:16.564247 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.568189 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 23:51:16.568258 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 23:51:16.610285 1469207 cri.go:89] found id: "1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:16.610313 1469207 cri.go:89] found id: ""
	I1001 23:51:16.610321 1469207 logs.go:282] 1 containers: [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef]
	I1001 23:51:16.610387 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:16.614026 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 23:51:16.614099 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 23:51:16.656668 1469207 cri.go:89] found id: "51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:16.656690 1469207 cri.go:89] found id: ""
	I1001 23:51:16.656698 1469207 logs.go:282] 1 containers: [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699]
	I1001 23:51:16.656819 1469207 ssh_runner.go:195] Run: which crictl
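The container-ID discovery above is one crictl invocation per control-plane component. Condensed into a single loop over the same names, run inside the node as the log does via ssh_runner:

	# print the container ID (if any) for each component, in all states
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"
	done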
	I1001 23:51:16.660678 1469207 logs.go:123] Gathering logs for coredns [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d] ...
	I1001 23:51:16.660752 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:16.706396 1469207 logs.go:123] Gathering logs for kube-scheduler [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178] ...
	I1001 23:51:16.706427 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:16.761613 1469207 logs.go:123] Gathering logs for CRI-O ...
	I1001 23:51:16.761649 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 23:51:16.861708 1469207 logs.go:123] Gathering logs for dmesg ...
	I1001 23:51:16.861742 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 23:51:16.878452 1469207 logs.go:123] Gathering logs for etcd [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575] ...
	I1001 23:51:16.878491 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:16.924996 1469207 logs.go:123] Gathering logs for kube-apiserver [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7] ...
	I1001 23:51:16.925030 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:16.992363 1469207 logs.go:123] Gathering logs for kube-proxy [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93] ...
	I1001 23:51:16.992401 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:17.037242 1469207 logs.go:123] Gathering logs for kube-controller-manager [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef] ...
	I1001 23:51:17.037272 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:17.118783 1469207 logs.go:123] Gathering logs for kindnet [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699] ...
	I1001 23:51:17.118831 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:17.160367 1469207 logs.go:123] Gathering logs for container status ...
	I1001 23:51:17.160397 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 23:51:17.221184 1469207 logs.go:123] Gathering logs for kubelet ...
	I1001 23:51:17.221214 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 23:51:17.286598 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.807315    1485 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.286852 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.807370    1485 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.287041 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.816094    1485 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.287309 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.816145    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.287500 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.821938    1485 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.287731 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.821988    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.287924 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829571    1485 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.288150 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829620    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.288332 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829787    1485 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.288554 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829815    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.288737 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829992    1485 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.288960 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.289139 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.289363 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.289535 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.289747 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:17.329800 1469207 logs.go:123] Gathering logs for describe nodes ...
	I1001 23:51:17.329832 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 23:51:17.550248 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:17.550282 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 23:51:17.550352 1469207 out.go:270] X Problems detected in kubelet:
	W1001 23:51:17.550367 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.550374 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.550382 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:17.550388 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:17.550394 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:17.550516 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:17.550532 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
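The forbidden-list errors flagged above ("no relationship found between node 'addons-902832' and this object") all carry the same 23:49:45 timestamp and are commonly transient node-authorizer noise from kubelet startup, emitted before the pods referencing those ConfigMaps and Secrets are bound to the node. To re-check them directly, a sketch reusing the same journalctl invocation the log runs:

	# re-run the kubelet log scan from the host via the profile's ssh wrapper
	out/minikube-linux-arm64 -p addons-902832 ssh -- \
	  "sudo journalctl -u kubelet -n 400 | grep -E 'reflector.go:(561|158)'"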
	I1001 23:51:27.551603 1469207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:51:27.565363 1469207 api_server.go:72] duration metric: took 2m25.927024032s to wait for apiserver process to appear ...
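The apiserver process check above can be reproduced from the host through the profile's ssh wrapper (a sketch; the pgrep flags and pattern are taken verbatim from the Run line above):

	# -x exact match, -n newest process, -f match against the full command line
	out/minikube-linux-arm64 -p addons-902832 ssh -- \
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'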
	I1001 23:51:27.565388 1469207 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:51:27.565423 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 23:51:27.565482 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 23:51:27.606037 1469207 cri.go:89] found id: "ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:27.606059 1469207 cri.go:89] found id: ""
	I1001 23:51:27.606067 1469207 logs.go:282] 1 containers: [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7]
	I1001 23:51:27.606126 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.609639 1469207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 23:51:27.609710 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 23:51:27.647251 1469207 cri.go:89] found id: "2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:27.647276 1469207 cri.go:89] found id: ""
	I1001 23:51:27.647284 1469207 logs.go:282] 1 containers: [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575]
	I1001 23:51:27.647344 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.650919 1469207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 23:51:27.650990 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 23:51:27.690339 1469207 cri.go:89] found id: "6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:27.690371 1469207 cri.go:89] found id: ""
	I1001 23:51:27.690379 1469207 logs.go:282] 1 containers: [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d]
	I1001 23:51:27.690436 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.694002 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 23:51:27.694101 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 23:51:27.737387 1469207 cri.go:89] found id: "294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:27.737417 1469207 cri.go:89] found id: ""
	I1001 23:51:27.737427 1469207 logs.go:282] 1 containers: [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178]
	I1001 23:51:27.737494 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.741134 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 23:51:27.741209 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 23:51:27.781872 1469207 cri.go:89] found id: "18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:27.781893 1469207 cri.go:89] found id: ""
	I1001 23:51:27.781900 1469207 logs.go:282] 1 containers: [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93]
	I1001 23:51:27.781955 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.785422 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 23:51:27.785497 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 23:51:27.824617 1469207 cri.go:89] found id: "1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:27.824639 1469207 cri.go:89] found id: ""
	I1001 23:51:27.824647 1469207 logs.go:282] 1 containers: [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef]
	I1001 23:51:27.824704 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.828268 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 23:51:27.828338 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 23:51:27.869395 1469207 cri.go:89] found id: "51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:27.869467 1469207 cri.go:89] found id: ""
	I1001 23:51:27.869483 1469207 logs.go:282] 1 containers: [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699]
	I1001 23:51:27.869556 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:27.873017 1469207 logs.go:123] Gathering logs for dmesg ...
	I1001 23:51:27.873045 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 23:51:27.889954 1469207 logs.go:123] Gathering logs for describe nodes ...
	I1001 23:51:27.889983 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 23:51:28.024160 1469207 logs.go:123] Gathering logs for kube-apiserver [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7] ...
	I1001 23:51:28.024193 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:28.092269 1469207 logs.go:123] Gathering logs for kube-scheduler [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178] ...
	I1001 23:51:28.092303 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:28.141257 1469207 logs.go:123] Gathering logs for CRI-O ...
	I1001 23:51:28.141294 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 23:51:28.234950 1469207 logs.go:123] Gathering logs for container status ...
	I1001 23:51:28.234989 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 23:51:28.307921 1469207 logs.go:123] Gathering logs for kubelet ...
	I1001 23:51:28.307952 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 23:51:28.375164 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.807315    1485 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.375422 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.807370    1485 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.375611 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.816094    1485 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.375842 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.816145    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.376033 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.821938    1485 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.376260 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.821988    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.376452 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829571    1485 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.376678 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829620    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.376859 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829787    1485 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.377082 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829815    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.377263 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829992    1485 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.377492 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.377671 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.377892 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.378065 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.378278 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:28.418958 1469207 logs.go:123] Gathering logs for etcd [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575] ...
	I1001 23:51:28.418986 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:28.472368 1469207 logs.go:123] Gathering logs for coredns [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d] ...
	I1001 23:51:28.472406 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:28.516665 1469207 logs.go:123] Gathering logs for kube-proxy [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93] ...
	I1001 23:51:28.516694 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:28.554784 1469207 logs.go:123] Gathering logs for kube-controller-manager [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef] ...
	I1001 23:51:28.554819 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:28.646942 1469207 logs.go:123] Gathering logs for kindnet [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699] ...
	I1001 23:51:28.646975 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:28.694269 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:28.694348 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 23:51:28.694427 1469207 out.go:270] X Problems detected in kubelet:
	W1001 23:51:28.694585 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.694640 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.694763 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:28.694814 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:28.694848 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:28.694882 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:28.694906 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:51:38.696816 1469207 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1001 23:51:38.705548 1469207 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1001 23:51:38.706569 1469207 api_server.go:141] control plane version: v1.31.1
	I1001 23:51:38.706597 1469207 api_server.go:131] duration metric: took 11.141200418s to wait for apiserver health ...
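The healthz probe above can be reproduced from inside the node; -k skips verification of the cluster-CA-signed serving certificate, and the expected body is the literal "ok" shown above:

	# hit the apiserver health endpoint at the address the log reports
	out/minikube-linux-arm64 -p addons-902832 ssh -- \
	  curl -sk https://192.168.49.2:8443/healthz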
	I1001 23:51:38.706617 1469207 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:51:38.706638 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 23:51:38.706704 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 23:51:38.747460 1469207 cri.go:89] found id: "ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:38.747486 1469207 cri.go:89] found id: ""
	I1001 23:51:38.747493 1469207 logs.go:282] 1 containers: [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7]
	I1001 23:51:38.747549 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.751078 1469207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 23:51:38.751156 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 23:51:38.789076 1469207 cri.go:89] found id: "2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:38.789100 1469207 cri.go:89] found id: ""
	I1001 23:51:38.789108 1469207 logs.go:282] 1 containers: [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575]
	I1001 23:51:38.789199 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.792673 1469207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 23:51:38.792775 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 23:51:38.834449 1469207 cri.go:89] found id: "6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:38.834472 1469207 cri.go:89] found id: ""
	I1001 23:51:38.834480 1469207 logs.go:282] 1 containers: [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d]
	I1001 23:51:38.834539 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.837974 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 23:51:38.838054 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 23:51:38.876387 1469207 cri.go:89] found id: "294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:38.876411 1469207 cri.go:89] found id: ""
	I1001 23:51:38.876419 1469207 logs.go:282] 1 containers: [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178]
	I1001 23:51:38.876472 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.881038 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 23:51:38.881128 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 23:51:38.919558 1469207 cri.go:89] found id: "18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:38.919577 1469207 cri.go:89] found id: ""
	I1001 23:51:38.919584 1469207 logs.go:282] 1 containers: [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93]
	I1001 23:51:38.919641 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.923618 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 23:51:38.923694 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 23:51:38.960505 1469207 cri.go:89] found id: "1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:38.960523 1469207 cri.go:89] found id: ""
	I1001 23:51:38.960531 1469207 logs.go:282] 1 containers: [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef]
	I1001 23:51:38.960594 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:38.964424 1469207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 23:51:38.964494 1469207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 23:51:39.003753 1469207 cri.go:89] found id: "51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:39.003828 1469207 cri.go:89] found id: ""
	I1001 23:51:39.003852 1469207 logs.go:282] 1 containers: [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699]
	I1001 23:51:39.003939 1469207 ssh_runner.go:195] Run: which crictl
	I1001 23:51:39.009596 1469207 logs.go:123] Gathering logs for kube-controller-manager [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef] ...
	I1001 23:51:39.009623 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef"
	I1001 23:51:39.080079 1469207 logs.go:123] Gathering logs for kindnet [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699] ...
	I1001 23:51:39.080120 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699"
	I1001 23:51:39.130950 1469207 logs.go:123] Gathering logs for container status ...
	I1001 23:51:39.130976 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 23:51:39.193927 1469207 logs.go:123] Gathering logs for kubelet ...
	I1001 23:51:39.193959 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 23:51:39.256642 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.807315    1485 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.256899 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.807370    1485 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.257092 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.816094    1485 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.257324 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.816145    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.257512 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.821938    1485 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.257741 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.821988    1485 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.257930 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829571    1485 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.258159 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829620    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.258341 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829787    1485 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.258564 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.829815    1485 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.258751 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.829992    1485 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.258976 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.259155 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.259422 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.259598 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.259815 1469207 logs.go:138] Found kubelet problem: Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:39.301276 1469207 logs.go:123] Gathering logs for describe nodes ...
	I1001 23:51:39.301305 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 23:51:39.446748 1469207 logs.go:123] Gathering logs for etcd [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575] ...
	I1001 23:51:39.446785 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"
	I1001 23:51:39.504821 1469207 logs.go:123] Gathering logs for kube-proxy [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93] ...
	I1001 23:51:39.504855 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93"
	I1001 23:51:39.547657 1469207 logs.go:123] Gathering logs for CRI-O ...
	I1001 23:51:39.547686 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 23:51:39.644920 1469207 logs.go:123] Gathering logs for dmesg ...
	I1001 23:51:39.644956 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 23:51:39.661786 1469207 logs.go:123] Gathering logs for kube-apiserver [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7] ...
	I1001 23:51:39.661817 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7"
	I1001 23:51:39.720069 1469207 logs.go:123] Gathering logs for coredns [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d] ...
	I1001 23:51:39.720103 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d"
	I1001 23:51:39.767501 1469207 logs.go:123] Gathering logs for kube-scheduler [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178] ...
	I1001 23:51:39.767530 1469207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178"
	I1001 23:51:39.831378 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:39.831411 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 23:51:39.831493 1469207 out.go:270] X Problems detected in kubelet:
	W1001 23:51:39.831509 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.830021    1485 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.831533 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.841560    1485 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-902832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.831544 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.841611    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	W1001 23:51:39.831552 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: W1001 23:49:45.847399    1485 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-902832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-902832' and this object
	W1001 23:51:39.831557 1469207 out.go:270]   Oct 01 23:49:45 addons-902832 kubelet[1485]: E1001 23:49:45.847447    1485 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-902832\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-902832' and this object" logger="UnhandledError"
	I1001 23:51:39.831599 1469207 out.go:358] Setting ErrFile to fd 2...
	I1001 23:51:39.831608 1469207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:51:49.844865 1469207 system_pods.go:59] 18 kube-system pods found
	I1001 23:51:49.844902 1469207 system_pods.go:61] "coredns-7c65d6cfc9-xljjm" [e0ad2956-c010-4fc7-b0d8-4d32b01451d8] Running
	I1001 23:51:49.844908 1469207 system_pods.go:61] "csi-hostpath-attacher-0" [ac852b7e-ae3b-469b-8187-4e7defd56346] Running
	I1001 23:51:49.844913 1469207 system_pods.go:61] "csi-hostpath-resizer-0" [c3e31778-df3b-462e-a4be-109b7954b782] Running
	I1001 23:51:49.844917 1469207 system_pods.go:61] "csi-hostpathplugin-65tpx" [a4743192-4d2a-4c3a-8ee9-46fad74b784b] Running
	I1001 23:51:49.844921 1469207 system_pods.go:61] "etcd-addons-902832" [29071b69-21dc-4c9b-b469-4d667f3eaad8] Running
	I1001 23:51:49.844925 1469207 system_pods.go:61] "kindnet-frb7r" [ab2734fa-ca9d-47b1-a3d9-d34e0e0fb55f] Running
	I1001 23:51:49.844928 1469207 system_pods.go:61] "kube-apiserver-addons-902832" [b9f460d1-7581-4b09-8b2e-646bd2a89859] Running
	I1001 23:51:49.844932 1469207 system_pods.go:61] "kube-controller-manager-addons-902832" [f0e7e114-9900-415d-a36b-c19f1ccb1e4e] Running
	I1001 23:51:49.844936 1469207 system_pods.go:61] "kube-ingress-dns-minikube" [3f10c5a6-50e8-49a4-8cad-a06c995525bd] Running
	I1001 23:51:49.844940 1469207 system_pods.go:61] "kube-proxy-kx8p9" [8619925a-3b0d-41d1-847a-23f287f14b34] Running
	I1001 23:51:49.844944 1469207 system_pods.go:61] "kube-scheduler-addons-902832" [e29eb860-afff-44b0-8e7d-717180fbff55] Running
	I1001 23:51:49.844948 1469207 system_pods.go:61] "metrics-server-84c5f94fbc-78xch" [9a1268e4-5691-4653-93b1-c7a18c5734b5] Running
	I1001 23:51:49.844952 1469207 system_pods.go:61] "nvidia-device-plugin-daemonset-zz9mg" [18ac45a3-6b0c-4535-a78d-cc801c2d3d20] Running
	I1001 23:51:49.844956 1469207 system_pods.go:61] "registry-66c9cd494c-wt4tb" [89b4caf4-80a6-4169-98c5-1a6ccdd606c0] Running
	I1001 23:51:49.844960 1469207 system_pods.go:61] "registry-proxy-8h2cr" [de013b46-27a0-473a-9c80-20d0ffeaaa75] Running
	I1001 23:51:49.844964 1469207 system_pods.go:61] "snapshot-controller-56fcc65765-6sfbh" [6ab5415b-4d25-411c-b95c-4c348f8b8b01] Running
	I1001 23:51:49.844969 1469207 system_pods.go:61] "snapshot-controller-56fcc65765-8d7bz" [42ef9a62-c0ee-4ed2-8516-18421d7e01bf] Running
	I1001 23:51:49.844973 1469207 system_pods.go:61] "storage-provisioner" [5d5990fa-0392-44eb-af89-06f613fee5f9] Running
	I1001 23:51:49.844979 1469207 system_pods.go:74] duration metric: took 11.138355663s to wait for pod list to return data ...
	I1001 23:51:49.844993 1469207 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:51:49.847912 1469207 default_sa.go:45] found service account: "default"
	I1001 23:51:49.847937 1469207 default_sa.go:55] duration metric: took 2.937645ms for default service account to be created ...
	I1001 23:51:49.847946 1469207 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:51:49.858831 1469207 system_pods.go:86] 18 kube-system pods found
	I1001 23:51:49.858868 1469207 system_pods.go:89] "coredns-7c65d6cfc9-xljjm" [e0ad2956-c010-4fc7-b0d8-4d32b01451d8] Running
	I1001 23:51:49.858876 1469207 system_pods.go:89] "csi-hostpath-attacher-0" [ac852b7e-ae3b-469b-8187-4e7defd56346] Running
	I1001 23:51:49.858881 1469207 system_pods.go:89] "csi-hostpath-resizer-0" [c3e31778-df3b-462e-a4be-109b7954b782] Running
	I1001 23:51:49.858886 1469207 system_pods.go:89] "csi-hostpathplugin-65tpx" [a4743192-4d2a-4c3a-8ee9-46fad74b784b] Running
	I1001 23:51:49.858890 1469207 system_pods.go:89] "etcd-addons-902832" [29071b69-21dc-4c9b-b469-4d667f3eaad8] Running
	I1001 23:51:49.858897 1469207 system_pods.go:89] "kindnet-frb7r" [ab2734fa-ca9d-47b1-a3d9-d34e0e0fb55f] Running
	I1001 23:51:49.858901 1469207 system_pods.go:89] "kube-apiserver-addons-902832" [b9f460d1-7581-4b09-8b2e-646bd2a89859] Running
	I1001 23:51:49.858906 1469207 system_pods.go:89] "kube-controller-manager-addons-902832" [f0e7e114-9900-415d-a36b-c19f1ccb1e4e] Running
	I1001 23:51:49.858911 1469207 system_pods.go:89] "kube-ingress-dns-minikube" [3f10c5a6-50e8-49a4-8cad-a06c995525bd] Running
	I1001 23:51:49.858915 1469207 system_pods.go:89] "kube-proxy-kx8p9" [8619925a-3b0d-41d1-847a-23f287f14b34] Running
	I1001 23:51:49.858921 1469207 system_pods.go:89] "kube-scheduler-addons-902832" [e29eb860-afff-44b0-8e7d-717180fbff55] Running
	I1001 23:51:49.858925 1469207 system_pods.go:89] "metrics-server-84c5f94fbc-78xch" [9a1268e4-5691-4653-93b1-c7a18c5734b5] Running
	I1001 23:51:49.858930 1469207 system_pods.go:89] "nvidia-device-plugin-daemonset-zz9mg" [18ac45a3-6b0c-4535-a78d-cc801c2d3d20] Running
	I1001 23:51:49.858939 1469207 system_pods.go:89] "registry-66c9cd494c-wt4tb" [89b4caf4-80a6-4169-98c5-1a6ccdd606c0] Running
	I1001 23:51:49.858951 1469207 system_pods.go:89] "registry-proxy-8h2cr" [de013b46-27a0-473a-9c80-20d0ffeaaa75] Running
	I1001 23:51:49.858959 1469207 system_pods.go:89] "snapshot-controller-56fcc65765-6sfbh" [6ab5415b-4d25-411c-b95c-4c348f8b8b01] Running
	I1001 23:51:49.858963 1469207 system_pods.go:89] "snapshot-controller-56fcc65765-8d7bz" [42ef9a62-c0ee-4ed2-8516-18421d7e01bf] Running
	I1001 23:51:49.858967 1469207 system_pods.go:89] "storage-provisioner" [5d5990fa-0392-44eb-af89-06f613fee5f9] Running
	I1001 23:51:49.858975 1469207 system_pods.go:126] duration metric: took 11.022846ms to wait for k8s-apps to be running ...
	I1001 23:51:49.858987 1469207 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:51:49.859049 1469207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:51:49.872588 1469207 system_svc.go:56] duration metric: took 13.590851ms WaitForService to wait for kubelet
	I1001 23:51:49.872618 1469207 kubeadm.go:582] duration metric: took 2m48.234283937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:51:49.872637 1469207 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:51:49.876175 1469207 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1001 23:51:49.876222 1469207 node_conditions.go:123] node cpu capacity is 2
	I1001 23:51:49.876245 1469207 node_conditions.go:105] duration metric: took 3.597519ms to run NodePressure ...
	I1001 23:51:49.876258 1469207 start.go:241] waiting for startup goroutines ...
	I1001 23:51:49.876271 1469207 start.go:246] waiting for cluster config update ...
	I1001 23:51:49.876288 1469207 start.go:255] writing updated cluster config ...
	I1001 23:51:49.876602 1469207 ssh_runner.go:195] Run: rm -f paused
	I1001 23:51:50.270094 1469207 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 23:51:50.272431 1469207 out.go:177] * Done! kubectl is now configured to use "addons-902832" cluster and "default" namespace by default
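
Annotation: the "no relationship found between node 'addons-902832' and this object" warnings flagged above come from the Node authorizer: the kubelet tried to list ConfigMaps/Secrets for pods that had not yet been bound to the node, so the API server refused the list. These are typically transient during addon bring-up. A suggested way to confirm the objects became readable once the pods were scheduled (context and names taken from this report; the check itself is not part of the captured run):

	kubectl --context addons-902832 -n gcp-auth get configmap kube-root-ca.crt
	kubectl --context addons-902832 -n yakd-dashboard get configmap kube-root-ca.crt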
	
	
	==> CRI-O <==
	Oct 02 00:04:15 addons-902832 crio[964]: time="2024-10-02 00:04:15.108460953Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-4zl65 Namespace:ingress-nginx ID:1e1a56981195c3c7a5fdea4439dd0658a439a0674b4c79d5aab1eaf0a2d6e330 UID:e2a2bbb7-d467-41f5-9024-c3b53775fb94 NetNS:/var/run/netns/cce80509-3cd5-46ac-9f15-554ee7c0e353 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 00:04:15 addons-902832 crio[964]: time="2024-10-02 00:04:15.108613163Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-4zl65 from CNI network \"kindnet\" (type=ptp)"
	Oct 02 00:04:15 addons-902832 crio[964]: time="2024-10-02 00:04:15.133076413Z" level=info msg="Stopped pod sandbox: 1e1a56981195c3c7a5fdea4439dd0658a439a0674b4c79d5aab1eaf0a2d6e330" id=1d073d32-ca51-413d-9f12-e64730e3e8f3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 00:04:15 addons-902832 crio[964]: time="2024-10-02 00:04:15.294402403Z" level=info msg="Removing container: d91bc54e80c0c0f76e3c2f47cbef3c41f6e8fab617f1f85c84866140b4b64504" id=581f0a78-f0d8-40e1-a0d0-945037e3db6c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 00:04:15 addons-902832 crio[964]: time="2024-10-02 00:04:15.311090233Z" level=info msg="Removed container d91bc54e80c0c0f76e3c2f47cbef3c41f6e8fab617f1f85c84866140b4b64504: ingress-nginx/ingress-nginx-controller-bc57996ff-4zl65/controller" id=581f0a78-f0d8-40e1-a0d0-945037e3db6c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 00:04:56 addons-902832 crio[964]: time="2024-10-02 00:04:56.955593478Z" level=info msg="Removing container: bcfa00852ce42758731cdacb16950b513695d58dd91363dc26f9e6a894df15d3" id=d5da7a9f-e51e-4fc6-a969-a494b249fb9e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 00:04:56 addons-902832 crio[964]: time="2024-10-02 00:04:56.972111032Z" level=info msg="Removed container bcfa00852ce42758731cdacb16950b513695d58dd91363dc26f9e6a894df15d3: ingress-nginx/ingress-nginx-admission-patch-vnf2m/patch" id=d5da7a9f-e51e-4fc6-a969-a494b249fb9e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 00:04:56 addons-902832 crio[964]: time="2024-10-02 00:04:56.973298326Z" level=info msg="Removing container: fb83cab1649d565c4bbc7161ca5f6a55aaeed4322c951e7533d981ba050c5013" id=358d00fc-939c-4c18-81ca-8fb9d1a06ce1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 00:04:56 addons-902832 crio[964]: time="2024-10-02 00:04:56.992581253Z" level=info msg="Removed container fb83cab1649d565c4bbc7161ca5f6a55aaeed4322c951e7533d981ba050c5013: ingress-nginx/ingress-nginx-admission-create-zbclz/create" id=358d00fc-939c-4c18-81ca-8fb9d1a06ce1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 00:04:56 addons-902832 crio[964]: time="2024-10-02 00:04:56.993850950Z" level=info msg="Stopping pod sandbox: 1e1a56981195c3c7a5fdea4439dd0658a439a0674b4c79d5aab1eaf0a2d6e330" id=d7c1ea81-82c1-42fa-b660-680cfe3c3e68 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 00:04:56 addons-902832 crio[964]: time="2024-10-02 00:04:56.993890244Z" level=info msg="Stopped pod sandbox (already stopped): 1e1a56981195c3c7a5fdea4439dd0658a439a0674b4c79d5aab1eaf0a2d6e330" id=d7c1ea81-82c1-42fa-b660-680cfe3c3e68 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 00:04:56 addons-902832 crio[964]: time="2024-10-02 00:04:56.994243630Z" level=info msg="Removing pod sandbox: 1e1a56981195c3c7a5fdea4439dd0658a439a0674b4c79d5aab1eaf0a2d6e330" id=a0c408b4-2bbd-40b4-842e-fa14b91c7004 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.004618422Z" level=info msg="Removed pod sandbox: 1e1a56981195c3c7a5fdea4439dd0658a439a0674b4c79d5aab1eaf0a2d6e330" id=a0c408b4-2bbd-40b4-842e-fa14b91c7004 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.006978997Z" level=info msg="Stopping pod sandbox: b05db13c862912ede2e03976e60f9c84834a6a90be18fa72b0eec9dd2a6c8b9f" id=448ac6e4-df79-4853-8469-40ac278d4a2b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.007057264Z" level=info msg="Stopped pod sandbox (already stopped): b05db13c862912ede2e03976e60f9c84834a6a90be18fa72b0eec9dd2a6c8b9f" id=448ac6e4-df79-4853-8469-40ac278d4a2b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.008256013Z" level=info msg="Removing pod sandbox: b05db13c862912ede2e03976e60f9c84834a6a90be18fa72b0eec9dd2a6c8b9f" id=67a59f93-82e5-4da3-8671-eae403cb587c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.019090215Z" level=info msg="Removed pod sandbox: b05db13c862912ede2e03976e60f9c84834a6a90be18fa72b0eec9dd2a6c8b9f" id=67a59f93-82e5-4da3-8671-eae403cb587c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.019903217Z" level=info msg="Stopping pod sandbox: d90c21a180c2a9335270ac0569ff3601ee398ebe8e31ee642675dd01d3329f62" id=02bb3ba9-2469-48e4-80de-94829f653d24 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.019943684Z" level=info msg="Stopped pod sandbox (already stopped): d90c21a180c2a9335270ac0569ff3601ee398ebe8e31ee642675dd01d3329f62" id=02bb3ba9-2469-48e4-80de-94829f653d24 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.020221617Z" level=info msg="Removing pod sandbox: d90c21a180c2a9335270ac0569ff3601ee398ebe8e31ee642675dd01d3329f62" id=f770c28c-cc63-43ae-89f7-c9cf824a2ee5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.030768089Z" level=info msg="Removed pod sandbox: d90c21a180c2a9335270ac0569ff3601ee398ebe8e31ee642675dd01d3329f62" id=f770c28c-cc63-43ae-89f7-c9cf824a2ee5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.032463367Z" level=info msg="Stopping pod sandbox: 8e8e0324c05584d22215dddd6521e5b1fb03170586149e8f3f6615f095061bfb" id=aa657be1-077a-4b5d-b34c-f9844e3626d1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.032507124Z" level=info msg="Stopped pod sandbox (already stopped): 8e8e0324c05584d22215dddd6521e5b1fb03170586149e8f3f6615f095061bfb" id=aa657be1-077a-4b5d-b34c-f9844e3626d1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.032852708Z" level=info msg="Removing pod sandbox: 8e8e0324c05584d22215dddd6521e5b1fb03170586149e8f3f6615f095061bfb" id=eae7816f-efe0-4b3d-84b3-aa5a065da09b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 00:04:57 addons-902832 crio[964]: time="2024-10-02 00:04:57.043065903Z" level=info msg="Removed pod sandbox: 8e8e0324c05584d22215dddd6521e5b1fb03170586149e8f3f6615f095061bfb" id=eae7816f-efe0-4b3d-84b3-aa5a065da09b name=/runtime.v1.RuntimeService/RemovePodSandbox
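
Annotation: the CRI-O entries above show the ingress-nginx controller pod (sandbox 1e1a5698…) being stopped and its container removed around 00:04, followed by garbage collection of the admission-job sandboxes, i.e. the addon was torn down after the test gave up. To inspect what is still running on the node, the same crictl binary invoked throughout this log can be reused over minikube ssh (suggested checks, not part of the run):

	out/minikube-linux-arm64 -p addons-902832 ssh "sudo crictl ps -a"
	out/minikube-linux-arm64 -p addons-902832 ssh "sudo crictl pods"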
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b22bf3c114895       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   c302ae9ca8d20       hello-world-app-55bf9c44b4-27hwm
	c5e8e938acd48       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     4 minutes ago       Running             busybox                   0                   c41d3fbd2ba33       busybox
	efbdf0c5ab72f       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                         5 minutes ago       Running             nginx                     0                   0b7cbc606bdee       nginx
	d19b81e8f59f8       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   16 minutes ago      Running             metrics-server            0                   ff26a80df1086       metrics-server-84c5f94fbc-78xch
	6b659db8e497d       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        17 minutes ago      Running             coredns                   0                   d5318fa4e0e8d       coredns-7c65d6cfc9-xljjm
	3364809d715c9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        17 minutes ago      Running             storage-provisioner       0                   3685e59a1c422       storage-provisioner
	51eadcd4b4318       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        17 minutes ago      Running             kindnet-cni               0                   a7295f4fba74d       kindnet-frb7r
	18f058a3c9bdb       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        17 minutes ago      Running             kube-proxy                0                   58c2c597c44bd       kube-proxy-kx8p9
	ea09c31646705       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        18 minutes ago      Running             kube-apiserver            0                   01d76498457eb       kube-apiserver-addons-902832
	1960bfd78af26       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        18 minutes ago      Running             kube-controller-manager   0                   d6cbe00b0bfa9       kube-controller-manager-addons-902832
	294331fdf9590       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        18 minutes ago      Running             kube-scheduler            0                   6c609d281447f       kube-scheduler-addons-902832
	2ba267277f9df       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        18 minutes ago      Running             etcd                      0                   44caa6b3912c3       etcd-addons-902832
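
Annotation: note that the ingress-nginx controller no longer appears in this container list (consistent with the removals in the CRI-O section above), while the nginx backend and the hello-world-app pod the test created are both Running. A suggested follow-up to confirm pod-level state from the client side:

	kubectl --context addons-902832 get pods -A -o wide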
	
	
	==> coredns [6b659db8e497d6ba6b68cb1a9eb13afcaf93745d23628ef27ffc09546970bf9d] <==
	[INFO] 10.244.0.20:39154 - 28834 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00087371s
	[INFO] 10.244.0.20:39154 - 31003 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000922169s
	[INFO] 10.244.0.20:45531 - 59274 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002535824s
	[INFO] 10.244.0.20:39154 - 49144 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002065091s
	[INFO] 10.244.0.20:45531 - 39634 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002131518s
	[INFO] 10.244.0.20:45531 - 50382 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000184266s
	[INFO] 10.244.0.20:39154 - 32097 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000098541s
	[INFO] 10.244.0.20:47518 - 55102 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000185202s
	[INFO] 10.244.0.20:51710 - 61243 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000336968s
	[INFO] 10.244.0.20:47518 - 10131 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000163221s
	[INFO] 10.244.0.20:47518 - 24323 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000280773s
	[INFO] 10.244.0.20:51710 - 12494 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000172854s
	[INFO] 10.244.0.20:47518 - 57504 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069562s
	[INFO] 10.244.0.20:51710 - 49364 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056483s
	[INFO] 10.244.0.20:47518 - 65497 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068364s
	[INFO] 10.244.0.20:51710 - 53222 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053414s
	[INFO] 10.244.0.20:47518 - 21247 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038539s
	[INFO] 10.244.0.20:51710 - 61809 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035774s
	[INFO] 10.244.0.20:51710 - 15171 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006029s
	[INFO] 10.244.0.20:47518 - 40631 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002092003s
	[INFO] 10.244.0.20:51710 - 59030 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002128384s
	[INFO] 10.244.0.20:47518 - 26004 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002042478s
	[INFO] 10.244.0.20:47518 - 60824 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000092806s
	[INFO] 10.244.0.20:51710 - 38285 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002217079s
	[INFO] 10.244.0.20:51710 - 43633 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000097015s
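
Annotation: the NXDOMAIN/NOERROR pattern above is normal ndots:5 search-path expansion, not a DNS failure. The client (10.244.0.20, presumably the ingress-nginx controller given the ingress-nginx search domain) tries hello-world-app.default.svc.cluster.local with every suffix on its search path (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal), each of which correctly returns NXDOMAIN, before the exact name resolves NOERROR. A minimal in-cluster sanity check (pod name and image tag are illustrative, not from the run):

	kubectl --context addons-902832 run dnscheck --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup hello-world-app.default.svc.cluster.local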
	
	
	==> describe nodes <==
	Name:               addons-902832
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-902832
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=addons-902832
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T23_48_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-902832
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:48:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-902832
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 02 Oct 2024 00:06:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 02 Oct 2024 00:04:35 +0000   Tue, 01 Oct 2024 23:48:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 02 Oct 2024 00:04:35 +0000   Tue, 01 Oct 2024 23:48:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 02 Oct 2024 00:04:35 +0000   Tue, 01 Oct 2024 23:48:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 02 Oct 2024 00:04:35 +0000   Tue, 01 Oct 2024 23:49:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-902832
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 61bc9dacefd548c8b2fdd23884b39f6c
	  System UUID:                0a0a3c90-92d5-433f-a6ea-4aa243645a16
	  Boot ID:                    9260520d-e63f-40a7-a450-76e3284bd194
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-world-app-55bf9c44b4-27hwm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 coredns-7c65d6cfc9-xljjm                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     18m
	  kube-system                 etcd-addons-902832                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kindnet-frb7r                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-addons-902832             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-902832    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-kx8p9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-902832             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-84c5f94fbc-78xch          100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         17m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  18m (x2 over 18m)  kubelet          Node addons-902832 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x2 over 18m)  kubelet          Node addons-902832 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x2 over 18m)  kubelet          Node addons-902832 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node addons-902832 event: Registered Node addons-902832 in Controller
	  Normal   NodeReady                17m                kubelet          Node addons-902832 status is now: NodeReady
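
Annotation: the node itself looks healthy at capture time: Ready since 23:49:45, no memory/disk/PID pressure, and only 950m of the 2 CPUs (47%) requested, so the ingress failure is unlikely to be resource starvation. The same summary can be re-derived with (suggested command, not part of the run):

	kubectl --context addons-902832 describe node addons-902832 | grep -A 10 "Allocated resources"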
	
	
	==> dmesg <==
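(No entries. The filtered capture run earlier in this log returned nothing, so no kernel warnings or errors were recorded; to re-check by hand, using the same command the log gatherer ran:)

	out/minikube-linux-arm64 -p addons-902832 ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"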
	
	
	==> etcd [2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575] <==
	{"level":"info","ts":"2024-10-01T23:48:51.643207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T23:48:51.643318Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T23:48:51.643386Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:48:51.643487Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:48:51.643542Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:49:02.339621Z","caller":"traceutil/trace.go:171","msg":"trace[1362621792] linearizableReadLoop","detail":"{readStateIndex:342; appliedIndex:341; }","duration":"153.781877ms","start":"2024-10-01T23:49:02.185821Z","end":"2024-10-01T23:49:02.339602Z","steps":["trace[1362621792] 'read index received'  (duration: 114.918247ms)","trace[1362621792] 'applied index is now lower than readState.Index'  (duration: 38.862998ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T23:49:02.348430Z","caller":"traceutil/trace.go:171","msg":"trace[1602327104] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"232.522348ms","start":"2024-10-01T23:49:02.115875Z","end":"2024-10-01T23:49:02.348397Z","steps":["trace[1602327104] 'process raft request'  (duration: 184.858039ms)","trace[1602327104] 'compare'  (duration: 38.779127ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T23:49:02.355921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.982777ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T23:49:02.367582Z","caller":"traceutil/trace.go:171","msg":"trace[1751064291] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:334; }","duration":"181.751338ms","start":"2024-10-01T23:49:02.185817Z","end":"2024-10-01T23:49:02.367568Z","steps":["trace[1751064291] 'agreement among raft nodes before linearized reading'  (duration: 169.955709ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:49:02.376838Z","caller":"traceutil/trace.go:171","msg":"trace[299981994] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"127.208659ms","start":"2024-10-01T23:49:02.249592Z","end":"2024-10-01T23:49:02.376800Z","steps":["trace[299981994] 'process raft request'  (duration: 117.913615ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T23:49:02.612925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.163215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-frb7r\" ","response":"range_response_count:1 size:3689"}
	{"level":"info","ts":"2024-10-01T23:49:02.614732Z","caller":"traceutil/trace.go:171","msg":"trace[581668595] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-frb7r; range_end:; response_count:1; response_revision:340; }","duration":"145.980731ms","start":"2024-10-01T23:49:02.468738Z","end":"2024-10-01T23:49:02.614719Z","steps":["trace[581668595] 'agreement among raft nodes before linearized reading'  (duration: 144.117448ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:49:02.643702Z","caller":"traceutil/trace.go:171","msg":"trace[1591692366] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"105.588128ms","start":"2024-10-01T23:49:02.533069Z","end":"2024-10-01T23:49:02.638657Z","steps":["trace[1591692366] 'process raft request'  (duration: 79.736435ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:49:03.226879Z","caller":"traceutil/trace.go:171","msg":"trace[192823060] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"154.228185ms","start":"2024-10-01T23:49:03.072627Z","end":"2024-10-01T23:49:03.226855Z","steps":["trace[192823060] 'process raft request'  (duration: 153.946009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:49:04.752034Z","caller":"traceutil/trace.go:171","msg":"trace[1215227702] transaction","detail":"{read_only:false; response_revision:352; number_of_response:1; }","duration":"100.770764ms","start":"2024-10-01T23:49:04.651245Z","end":"2024-10-01T23:49:04.752015Z","steps":["trace[1215227702] 'process raft request'  (duration: 81.493276ms)","trace[1215227702] 'compare'  (duration: 18.592196ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T23:49:05.602194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.270684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T23:49:05.602249Z","caller":"traceutil/trace.go:171","msg":"trace[756978685] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:373; }","duration":"131.342953ms","start":"2024-10-01T23:49:05.470893Z","end":"2024-10-01T23:49:05.602236Z","steps":["trace[756978685] 'agreement among raft nodes before linearized reading'  (duration: 131.247587ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T23:49:05.602453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.598266ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-10-01T23:49:05.602481Z","caller":"traceutil/trace.go:171","msg":"trace[756940475] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:373; }","duration":"131.627541ms","start":"2024-10-01T23:49:05.470847Z","end":"2024-10-01T23:49:05.602475Z","steps":["trace[756940475] 'agreement among raft nodes before linearized reading'  (duration: 131.567407ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:58:52.019225Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-10-01T23:58:52.051092Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1502,"took":"31.35509ms","hash":1230122647,"current-db-size-bytes":6225920,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3153920,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-10-01T23:58:52.051142Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1230122647,"revision":1502,"compact-revision":-1}
	{"level":"info","ts":"2024-10-02T00:03:52.025517Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1916}
	{"level":"info","ts":"2024-10-02T00:03:52.043649Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1916,"took":"17.508938ms","hash":3762113891,"current-db-size-bytes":6225920,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":4333568,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-10-02T00:03:52.043704Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3762113891,"revision":1916,"compact-revision":1502}
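
Annotation: etcd shows a handful of "apply request took too long" warnings (roughly 100-230ms against the 100ms expected duration) clustered in the 23:49 startup window, then only routine compactions at 23:58 and 00:03: a slow disk during bring-up, but nothing failing at the time of the test. Suggested checks, reusing the container ID gathered above:

	kubectl --context addons-902832 -n kube-system get pod etcd-addons-902832
	out/minikube-linux-arm64 -p addons-902832 ssh "sudo /usr/bin/crictl logs --tail 50 2ba267277f9dfd5afd83cdd740d87d7211acf5d4a7756684425526574f45c575"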
	
	
	==> kernel <==
	 00:07:01 up  5:49,  0 users,  load average: 0.19, 0.48, 1.20
	Linux addons-902832 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [51eadcd4b43186000356b49be9a424856e2caad2229bdcddbf191f4885156699] <==
	I1002 00:04:55.367372       1 main.go:299] handling current node
	I1002 00:05:05.360740       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:05:05.360774       1 main.go:299] handling current node
	I1002 00:05:15.360705       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:05:15.360737       1 main.go:299] handling current node
	I1002 00:05:25.369749       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:05:25.369783       1 main.go:299] handling current node
	I1002 00:05:35.360925       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:05:35.361041       1 main.go:299] handling current node
	I1002 00:05:45.362625       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:05:45.362663       1 main.go:299] handling current node
	I1002 00:05:55.367413       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:05:55.367450       1 main.go:299] handling current node
	I1002 00:06:05.361427       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:06:05.361465       1 main.go:299] handling current node
	I1002 00:06:15.360734       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:06:15.360769       1 main.go:299] handling current node
	I1002 00:06:25.367611       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:06:25.367645       1 main.go:299] handling current node
	I1002 00:06:35.360721       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:06:35.360755       1 main.go:299] handling current node
	I1002 00:06:45.367359       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:06:45.367394       1 main.go:299] handling current node
	I1002 00:06:55.367953       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1002 00:06:55.367989       1 main.go:299] handling current node
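
Annotation: kindnet is simply heartbeating here, re-handling the single node (192.168.49.2) every ~10s with no errors, so the CNI side looks healthy. To confirm from the API side (label selector assumed from the standard kindnet DaemonSet, not shown in this report):

	kubectl --context addons-902832 -n kube-system get pods -l app=kindnet -o wide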
	
	
	==> kube-apiserver [ea09c316467056f756108cc778a25dc46252a9b9976b4a12b10ba53abfde5ad7] <==
	 > logger="UnhandledError"
	E1001 23:51:21.201238       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.30.3:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.30.3:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.30.3:443: i/o timeout" logger="UnhandledError"
	I1001 23:51:21.230044       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1001 23:51:21.242498       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1002 00:00:03.704620       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.98.244"}
	E1002 00:00:56.397568       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1002 00:01:00.368253       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1002 00:01:28.828093       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 00:01:28.828153       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 00:01:28.850230       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 00:01:28.851623       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 00:01:28.865967       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 00:01:28.866100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 00:01:28.894955       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 00:01:28.897120       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 00:01:28.919175       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 00:01:28.919784       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 00:01:29.896531       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1002 00:01:29.921179       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1002 00:01:30.015798       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1002 00:01:42.549384       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1002 00:01:43.597110       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1002 00:01:48.160096       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 00:01:48.488270       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.118.197"}
	I1002 00:04:06.659635       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.94.76"}
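
Annotation: two things stand out in the apiserver log. First, the v1beta1.metrics.k8s.io APIService was flapping at 23:51 (dial tcp 10.96.30.3:443: i/o timeout), which would make metrics-API consumers fail intermittently. Second, the snapshot.storage.k8s.io and gadget.kinvolk.io groups were removed at 00:01 as those addons were disabled, and the final entries allocate cluster IPs for the nginx and hello-world-app Services the test created. A suggested check on the aggregated API's availability:

	kubectl --context addons-902832 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"\n"}'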
	
	
	==> kube-controller-manager [1960bfd78af26624ed201d3541bb6638d8d2b55bbd760ce90e5659c05a13d0ef] <==
	I1002 00:04:35.870929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-902832"
	W1002 00:04:57.959800       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:04:57.959850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:04:58.515609       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:04:58.515648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:05:00.872965       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:05:00.873008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:05:02.701519       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:05:02.701565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:05:34.235469       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:05:34.235511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:05:41.982361       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:05:41.982400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:05:44.778712       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:05:44.778857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:06:02.206628       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:06:02.206689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:06:29.010854       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:06:29.010897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:06:31.846737       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:06:31.846781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:06:36.894823       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:06:36.894871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1002 00:06:47.785500       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 00:06:47.785550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
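
Annotation: the repeated "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" errors start at 00:04:57, right after the API-group removals seen in the apiserver log, so this is the controller-manager's metadata informer retrying against resource types that no longer exist: expected noise after disabling the snapshot and gadget addons rather than a new fault. Suggested checks for stale discovery entries:

	kubectl --context addons-902832 get apiservices
	kubectl --context addons-902832 get crds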
	
	
	==> kube-proxy [18f058a3c9bdba78ce4306c3a01b32b86cd445f786d408b2d1afc2ce70f87a93] <==
	I1001 23:49:05.891440       1 server_linux.go:66] "Using iptables proxy"
	I1001 23:49:07.001537       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1001 23:49:07.011298       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 23:49:07.351838       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1001 23:49:07.352007       1 server_linux.go:169] "Using iptables Proxier"
	I1001 23:49:07.379845       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 23:49:07.380496       1 server.go:483] "Version info" version="v1.31.1"
	I1001 23:49:07.380564       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:49:07.436072       1 config.go:328] "Starting node config controller"
	I1001 23:49:07.436108       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 23:49:07.437155       1 config.go:199] "Starting service config controller"
	I1001 23:49:07.437178       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 23:49:07.437389       1 config.go:105] "Starting endpoint slice config controller"
	I1001 23:49:07.437404       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 23:49:07.540120       1 shared_informer.go:320] Caches are synced for node config
	I1001 23:49:07.540255       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 23:49:07.541172       1 shared_informer.go:320] Caches are synced for service config
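
Annotation: kube-proxy came up cleanly in IPv4 iptables mode and synced all three caches within a second. The route_localnet=1 line is worth noting: it is what allows NodePort connections to 127.0.0.1, the kind of loopback probe the ingress test relies on. To verify the sysctl on the node (suggested check, not part of the run):

	out/minikube-linux-arm64 -p addons-902832 ssh "sysctl net.ipv4.conf.all.route_localnet"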
	
	
	==> kube-scheduler [294331fdf959028472164adcd9b7096a050e331f64ec24d0bc13468fe7bec178] <==
	W1001 23:48:54.169836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 23:48:54.169876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:54.169925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 23:48:54.172303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:54.171353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 23:48:54.172469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:54.171407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 23:48:54.172561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:54.174808       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 23:48:54.174841       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 23:48:54.982088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 23:48:54.982133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:54.996869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 23:48:54.996987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:55.022987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 23:48:55.023157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:55.025759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 23:48:55.025942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:55.204403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 23:48:55.204525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:55.274522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 23:48:55.274660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:48:55.463513       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 23:48:55.463637       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1001 23:48:57.147229       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
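
The kube-scheduler warnings above are the usual startup race rather than a real failure: the scheduler's informers begin listing resources before its RBAC bindings have propagated, so the API server answers "forbidden" until roughly 23:48:57, when the final cache syncs and the errors stop. An illustrative way to confirm the permissions after startup (a hedged spot-check, not part of the test run; each command should print "yes"):

    kubectl --context addons-902832 auth can-i list pods --as=system:kube-scheduler
    kubectl --context addons-902832 auth can-i list nodes --as=system:kube-scheduler
    kubectl --context addons-902832 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler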
	
	
	==> kubelet <==
	Oct 02 00:05:06 addons-902832 kubelet[1485]: E1002 00:05:06.859337    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827506859075659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:05:16 addons-902832 kubelet[1485]: E1002 00:05:16.862521    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827516862274158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:05:16 addons-902832 kubelet[1485]: E1002 00:05:16.862561    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827516862274158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:05:26 addons-902832 kubelet[1485]: E1002 00:05:26.865222    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827526864968335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:05:26 addons-902832 kubelet[1485]: E1002 00:05:26.865265    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827526864968335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:05:34 addons-902832 kubelet[1485]: I1002 00:05:34.399552    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 00:05:36 addons-902832 kubelet[1485]: E1002 00:05:36.868246    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827536868022464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:05:36 addons-902832 kubelet[1485]: E1002 00:05:36.868285    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827536868022464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:05:46 addons-902832 kubelet[1485]: E1002 00:05:46.871133    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827546870881259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:05:46 addons-902832 kubelet[1485]: E1002 00:05:46.871173    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827546870881259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:05:56 addons-902832 kubelet[1485]: E1002 00:05:56.874368    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827556874130913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:05:56 addons-902832 kubelet[1485]: E1002 00:05:56.874414    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827556874130913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:06 addons-902832 kubelet[1485]: E1002 00:06:06.877284    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827566876994672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:06 addons-902832 kubelet[1485]: E1002 00:06:06.877329    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827566876994672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:16 addons-902832 kubelet[1485]: E1002 00:06:16.880123    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827576879892160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:16 addons-902832 kubelet[1485]: E1002 00:06:16.880166    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827576879892160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:26 addons-902832 kubelet[1485]: E1002 00:06:26.882380    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827586882134001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:26 addons-902832 kubelet[1485]: E1002 00:06:26.882422    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827586882134001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:36 addons-902832 kubelet[1485]: E1002 00:06:36.885485    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827596885248566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:36 addons-902832 kubelet[1485]: E1002 00:06:36.885527    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827596885248566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:38 addons-902832 kubelet[1485]: I1002 00:06:38.399130    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 00:06:46 addons-902832 kubelet[1485]: E1002 00:06:46.888357    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827606888102376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:46 addons-902832 kubelet[1485]: E1002 00:06:46.888389    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827606888102376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:56 addons-902832 kubelet[1485]: E1002 00:06:56.890807    1485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827616890571283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:06:56 addons-902832 kubelet[1485]: E1002 00:06:56.890843    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727827616890571283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595390,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
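
The repeating eviction-manager error above is a kubelet/CRI-O stats gap rather than resource pressure: the ImageFsInfoResponse carries ImageFilesystems but an empty ContainerFilesystems list, so the kubelet cannot determine HasDedicatedImageFs. A hedged way to inspect the raw CRI response on the node (illustrative, not part of the test; crictl imagefsinfo issues the same ImageFsInfo call the kubelet makes):

    out/minikube-linux-arm64 -p addons-902832 ssh "sudo crictl imagefsinfo"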
	
	
	==> storage-provisioner [3364809d715c943bf5cba98a2de1982916305c3e5460d68ea5c787d3a04bf1c3] <==
	I1001 23:49:46.513428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 23:49:46.527896       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 23:49:46.528012       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 23:49:46.536238       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 23:49:46.536492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-902832_2a6589a3-258f-41de-a093-78aeb5af280a!
	I1001 23:49:46.536616       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fab6e1ea-fdd0-48bb-a53a-d4b2719a951f", APIVersion:"v1", ResourceVersion:"874", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-902832_2a6589a3-258f-41de-a093-78aeb5af280a became leader
	I1001 23:49:46.636900       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-902832_2a6589a3-258f-41de-a093-78aeb5af280a!
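
The storage-provisioner block shows a clean start: it acquires a leader lease on the kube-system/k8s.io-minikube-hostpath Endpoints object and only then starts its controller. As a hedged spot-check (not from this run), the current holder is recorded as the control-plane.alpha.kubernetes.io/leader annotation on that object:

    kubectl --context addons-902832 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml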
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-902832 -n addons-902832
helpers_test.go:261: (dbg) Run:  kubectl --context addons-902832 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (338.92s)
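
For a failure like the one above, a hedged first pass at diagnosis (illustrative commands, not from this run; the addon installs a metrics-server Deployment in kube-system, and kubectl top only works once the metrics.k8s.io API is being served):

    kubectl --context addons-902832 -n kube-system get deploy metrics-server
    kubectl --context addons-902832 top nodes   # fails until the metrics.k8s.io API responds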


Test pass (296/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.99
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.45
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 6.46
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.69
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 221.27
31 TestAddons/serial/GCPAuth/Namespaces 0.19
33 TestAddons/parallel/Registry 16.94
35 TestAddons/parallel/InspektorGadget 11.85
38 TestAddons/parallel/CSI 58.26
39 TestAddons/parallel/Headlamp 21.89
40 TestAddons/parallel/CloudSpanner 5.63
41 TestAddons/parallel/LocalPath 52.84
42 TestAddons/parallel/NvidiaDevicePlugin 6.53
43 TestAddons/parallel/Yakd 12.34
44 TestAddons/StoppedEnableDisable 12.21
45 TestCertOptions 35.79
46 TestCertExpiration 247.52
48 TestForceSystemdFlag 36.51
49 TestForceSystemdEnv 41.75
55 TestErrorSpam/setup 32.13
56 TestErrorSpam/start 0.78
57 TestErrorSpam/status 1.11
58 TestErrorSpam/pause 1.76
59 TestErrorSpam/unpause 1.83
60 TestErrorSpam/stop 1.43
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 75.15
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 29.58
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.18
72 TestFunctional/serial/CacheCmd/cache/add_local 1.45
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.91
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 36.73
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.73
83 TestFunctional/serial/LogsFileCmd 1.73
84 TestFunctional/serial/InvalidService 4.39
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 9.97
88 TestFunctional/parallel/DryRun 0.53
89 TestFunctional/parallel/InternationalLanguage 0.22
90 TestFunctional/parallel/StatusCmd 1.01
94 TestFunctional/parallel/ServiceCmdConnect 11.68
95 TestFunctional/parallel/AddonsCmd 0.17
96 TestFunctional/parallel/PersistentVolumeClaim 26.19
98 TestFunctional/parallel/SSHCmd 0.74
99 TestFunctional/parallel/CpCmd 2.28
101 TestFunctional/parallel/FileSync 0.35
102 TestFunctional/parallel/CertSync 2.14
106 TestFunctional/parallel/NodeLabels 0.21
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.76
110 TestFunctional/parallel/License 0.26
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.72
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.46
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 8.24
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
124 TestFunctional/parallel/ProfileCmd/profile_list 0.42
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
126 TestFunctional/parallel/MountCmd/any-port 8.14
127 TestFunctional/parallel/ServiceCmd/List 0.5
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
130 TestFunctional/parallel/ServiceCmd/Format 0.4
131 TestFunctional/parallel/ServiceCmd/URL 0.51
132 TestFunctional/parallel/MountCmd/specific-port 2.16
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.5
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.13
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.34
141 TestFunctional/parallel/ImageCommands/Setup 0.82
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.26
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.28
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.37
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.62
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.89
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
152 TestFunctional/delete_echo-server_images 0.03
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.01
158 TestMultiControlPlane/serial/StartCluster 173.09
159 TestMultiControlPlane/serial/DeployApp 9.13
160 TestMultiControlPlane/serial/PingHostFromPods 1.7
161 TestMultiControlPlane/serial/AddWorkerNode 62.64
162 TestMultiControlPlane/serial/NodeLabels 0.1
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
164 TestMultiControlPlane/serial/CopyFile 18.53
165 TestMultiControlPlane/serial/StopSecondaryNode 12.74
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
167 TestMultiControlPlane/serial/RestartSecondaryNode 49.76
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.01
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 241.88
170 TestMultiControlPlane/serial/DeleteSecondaryNode 11.65
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
172 TestMultiControlPlane/serial/StopCluster 35.91
173 TestMultiControlPlane/serial/RestartCluster 95.17
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
175 TestMultiControlPlane/serial/AddSecondaryNode 70.03
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
180 TestJSONOutput/start/Command 80.31
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.76
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.68
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.81
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
205 TestKicCustomNetwork/create_custom_network 41.09
206 TestKicCustomNetwork/use_default_bridge_network 35.21
207 TestKicExistingNetwork 31.67
208 TestKicCustomSubnet 34.22
209 TestKicStaticIP 34.95
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 65.53
214 TestMountStart/serial/StartWithMountFirst 9.43
215 TestMountStart/serial/VerifyMountFirst 0.25
216 TestMountStart/serial/StartWithMountSecond 9.2
217 TestMountStart/serial/VerifyMountSecond 0.25
218 TestMountStart/serial/DeleteFirst 1.62
219 TestMountStart/serial/VerifyMountPostDelete 0.25
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 8.06
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 78.46
226 TestMultiNode/serial/DeployApp2Nodes 7.03
227 TestMultiNode/serial/PingHostFrom2Pods 0.98
228 TestMultiNode/serial/AddNode 60.05
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.67
231 TestMultiNode/serial/CopyFile 9.72
232 TestMultiNode/serial/StopNode 2.19
233 TestMultiNode/serial/StartAfterStop 10.94
234 TestMultiNode/serial/RestartKeepsNodes 81.43
235 TestMultiNode/serial/DeleteNode 5.19
236 TestMultiNode/serial/StopMultiNode 23.87
237 TestMultiNode/serial/RestartMultiNode 53.27
238 TestMultiNode/serial/ValidateNameConflict 34.09
243 TestPreload 128.02
245 TestScheduledStopUnix 108.37
248 TestInsufficientStorage 10.32
249 TestRunningBinaryUpgrade 69.84
251 TestKubernetesUpgrade 393.26
252 TestMissingContainerUpgrade 160
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 37.85
256 TestNoKubernetes/serial/StartWithStopK8s 9.46
257 TestNoKubernetes/serial/Start 9.53
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
259 TestNoKubernetes/serial/ProfileList 1.6
260 TestNoKubernetes/serial/Stop 1.27
261 TestNoKubernetes/serial/StartNoArgs 7.49
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
263 TestStoppedBinaryUpgrade/Setup 0.66
264 TestStoppedBinaryUpgrade/Upgrade 73.9
265 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
274 TestPause/serial/Start 76.7
275 TestPause/serial/SecondStartNoReconfiguration 39.87
276 TestPause/serial/Pause 1.04
277 TestPause/serial/VerifyStatus 0.42
278 TestPause/serial/Unpause 0.99
279 TestPause/serial/PauseAgain 1.1
280 TestPause/serial/DeletePaused 3.12
281 TestPause/serial/VerifyDeletedResources 0.6
289 TestNetworkPlugins/group/false 4.94
294 TestStartStop/group/old-k8s-version/serial/FirstStart 191.01
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.8
297 TestStartStop/group/old-k8s-version/serial/DeployApp 11.67
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.36
299 TestStartStop/group/old-k8s-version/serial/Stop 12.26
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
301 TestStartStop/group/old-k8s-version/serial/SecondStart 143.51
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 13.41
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.18
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
310 TestStartStop/group/old-k8s-version/serial/Pause 2.92
312 TestStartStop/group/embed-certs/serial/FirstStart 77.45
313 TestStartStop/group/embed-certs/serial/DeployApp 11.35
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
315 TestStartStop/group/embed-certs/serial/Stop 12.05
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/embed-certs/serial/SecondStart 288.39
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.92
323 TestStartStop/group/no-preload/serial/FirstStart 62.27
324 TestStartStop/group/no-preload/serial/DeployApp 10.35
325 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
326 TestStartStop/group/no-preload/serial/Stop 12.03
327 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
328 TestStartStop/group/no-preload/serial/SecondStart 281.26
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
332 TestStartStop/group/embed-certs/serial/Pause 3.1
334 TestStartStop/group/newest-cni/serial/FirstStart 35.17
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
337 TestStartStop/group/newest-cni/serial/Stop 1.29
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
339 TestStartStop/group/newest-cni/serial/SecondStart 14.96
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
343 TestStartStop/group/newest-cni/serial/Pause 3.07
344 TestNetworkPlugins/group/calico/Start 57.48
345 TestNetworkPlugins/group/calico/ControllerPod 6.01
346 TestNetworkPlugins/group/calico/KubeletFlags 0.29
347 TestNetworkPlugins/group/calico/NetCatPod 10.3
348 TestNetworkPlugins/group/calico/DNS 0.19
349 TestNetworkPlugins/group/calico/Localhost 0.15
350 TestNetworkPlugins/group/calico/HairPin 0.16
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
352 TestNetworkPlugins/group/auto/Start 53.1
353 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
354 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
355 TestStartStop/group/no-preload/serial/Pause 4.33
356 TestNetworkPlugins/group/custom-flannel/Start 60.7
357 TestNetworkPlugins/group/auto/KubeletFlags 0.32
358 TestNetworkPlugins/group/auto/NetCatPod 12.35
359 TestNetworkPlugins/group/auto/DNS 0.17
360 TestNetworkPlugins/group/auto/Localhost 0.16
361 TestNetworkPlugins/group/auto/HairPin 0.15
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.36
364 TestNetworkPlugins/group/kindnet/Start 88.12
365 TestNetworkPlugins/group/custom-flannel/DNS 0.22
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
368 TestNetworkPlugins/group/flannel/Start 46.47
369 TestNetworkPlugins/group/flannel/ControllerPod 6.01
370 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
371 TestNetworkPlugins/group/flannel/NetCatPod 11.27
372 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/DNS 0.18
374 TestNetworkPlugins/group/flannel/Localhost 0.14
375 TestNetworkPlugins/group/flannel/HairPin 0.16
376 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
377 TestNetworkPlugins/group/kindnet/NetCatPod 12.27
378 TestNetworkPlugins/group/kindnet/DNS 0.21
379 TestNetworkPlugins/group/kindnet/Localhost 0.18
380 TestNetworkPlugins/group/kindnet/HairPin 0.21
381 TestNetworkPlugins/group/enable-default-cni/Start 77.52
382 TestNetworkPlugins/group/bridge/Start 82.3
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
389 TestNetworkPlugins/group/bridge/NetCatPod 12.4
390 TestNetworkPlugins/group/bridge/DNS 0.19
391 TestNetworkPlugins/group/bridge/Localhost 0.16
392 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (7.99s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-732922 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-732922 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.98504296s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.99s)
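
--download-only populates the local cache without creating a node, which is why the run above finishes in about eight seconds. A hedged look at what it leaves behind (the path is taken from the preload-exists check below):

    ls /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/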

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1001 23:47:59.906152 1468453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1001 23:47:59.906234 1468453 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-732922
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-732922: exit status 85 (68.294638ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-732922 | jenkins | v1.34.0 | 01 Oct 24 23:47 UTC |          |
	|         | -p download-only-732922        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:47:51
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:47:51.967734 1468458 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:47:51.967921 1468458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:47:51.967950 1468458 out.go:358] Setting ErrFile to fd 2...
	I1001 23:47:51.967969 1468458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:47:51.968215 1468458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	W1001 23:47:51.968366 1468458 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19740-1463060/.minikube/config/config.json: open /home/jenkins/minikube-integration/19740-1463060/.minikube/config/config.json: no such file or directory
	I1001 23:47:51.968788 1468458 out.go:352] Setting JSON to true
	I1001 23:47:51.969713 1468458 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":19812,"bootTime":1727806660,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1001 23:47:51.969811 1468458 start.go:139] virtualization:  
	I1001 23:47:51.972809 1468458 out.go:97] [download-only-732922] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1001 23:47:51.972999 1468458 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 23:47:51.973040 1468458 notify.go:220] Checking for updates...
	I1001 23:47:51.975301 1468458 out.go:169] MINIKUBE_LOCATION=19740
	I1001 23:47:51.976725 1468458 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:47:51.978223 1468458 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	I1001 23:47:51.979774 1468458 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	I1001 23:47:51.981130 1468458 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1001 23:47:51.983239 1468458 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 23:47:51.983484 1468458 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:47:52.007437 1468458 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:47:52.007574 1468458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:47:52.076372 1468458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 23:47:52.066517554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:47:52.076493 1468458 docker.go:318] overlay module found
	I1001 23:47:52.077917 1468458 out.go:97] Using the docker driver based on user configuration
	I1001 23:47:52.077947 1468458 start.go:297] selected driver: docker
	I1001 23:47:52.077954 1468458 start.go:901] validating driver "docker" against <nil>
	I1001 23:47:52.078067 1468458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:47:52.127467 1468458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 23:47:52.117981399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:47:52.127682 1468458 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:47:52.127979 1468458 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1001 23:47:52.128137 1468458 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 23:47:52.129476 1468458 out.go:169] Using Docker driver with root privileges
	I1001 23:47:52.130508 1468458 cni.go:84] Creating CNI manager for ""
	I1001 23:47:52.130588 1468458 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 23:47:52.130601 1468458 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 23:47:52.130675 1468458 start.go:340] cluster config:
	{Name:download-only-732922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-732922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:47:52.132006 1468458 out.go:97] Starting "download-only-732922" primary control-plane node in "download-only-732922" cluster
	I1001 23:47:52.132024 1468458 cache.go:121] Beginning downloading kic base image for docker with crio
	I1001 23:47:52.133390 1468458 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1001 23:47:52.133420 1468458 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 23:47:52.133578 1468458 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 23:47:52.147455 1468458 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 23:47:52.148051 1468458 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 23:47:52.148155 1468458 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 23:47:52.211578 1468458 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1001 23:47:52.211606 1468458 cache.go:56] Caching tarball of preloaded images
	I1001 23:47:52.211760 1468458 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 23:47:52.213177 1468458 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1001 23:47:52.213208 1468458 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1001 23:47:52.298827 1468458 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1001 23:47:56.473147 1468458 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	
	
	* The control-plane node download-only-732922 host does not exist
	  To start a cluster, run: "minikube start -p download-only-732922"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
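
The preload download earlier in this log (the download.go:107 line) pins an md5 checksum in the URL. A hedged manual verification of the cached tarball against that value:

    # 59cd2ef07b53f039bfd1761b921f2a02 comes from the ?checksum= parameter above.
    md5sum /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4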

TestDownloadOnly/v1.20.0/DeleteAll (0.45s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.45s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-732922
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.1/json-events (6.46s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-481946 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-481946 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.45716423s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.46s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1001 23:48:07.020975 1468453 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1001 23:48:07.021016 1468453 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-481946
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-481946: exit status 85 (66.452515ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-732922 | jenkins | v1.34.0 | 01 Oct 24 23:47 UTC |                     |
	|         | -p download-only-732922        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:48 UTC |
	| delete  | -p download-only-732922        | download-only-732922 | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:48 UTC |
	| start   | -o=json --download-only        | download-only-481946 | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC |                     |
	|         | -p download-only-481946        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:48:00
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:48:00.610660 1468656 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:48:00.610887 1468656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:48:00.610918 1468656 out.go:358] Setting ErrFile to fd 2...
	I1001 23:48:00.610937 1468656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:48:00.611234 1468656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	I1001 23:48:00.611670 1468656 out.go:352] Setting JSON to true
	I1001 23:48:00.612599 1468656 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":19821,"bootTime":1727806660,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1001 23:48:00.612702 1468656 start.go:139] virtualization:  
	I1001 23:48:00.614561 1468656 out.go:97] [download-only-481946] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 23:48:00.614775 1468656 notify.go:220] Checking for updates...
	I1001 23:48:00.616339 1468656 out.go:169] MINIKUBE_LOCATION=19740
	I1001 23:48:00.617999 1468656 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:48:00.619434 1468656 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	I1001 23:48:00.620556 1468656 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	I1001 23:48:00.621802 1468656 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1001 23:48:00.624340 1468656 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 23:48:00.624594 1468656 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:48:00.646313 1468656 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:48:00.646438 1468656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:48:00.703498 1468656 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-01 23:48:00.693850782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:48:00.703613 1468656 docker.go:318] overlay module found
	I1001 23:48:00.707061 1468656 out.go:97] Using the docker driver based on user configuration
	I1001 23:48:00.707084 1468656 start.go:297] selected driver: docker
	I1001 23:48:00.707091 1468656 start.go:901] validating driver "docker" against <nil>
	I1001 23:48:00.707220 1468656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:48:00.761284 1468656 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-01 23:48:00.7520277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:48:00.761492 1468656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:48:00.761787 1468656 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1001 23:48:00.761959 1468656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 23:48:00.763753 1468656 out.go:169] Using Docker driver with root privileges
	I1001 23:48:00.765033 1468656 cni.go:84] Creating CNI manager for ""
	I1001 23:48:00.765094 1468656 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 23:48:00.765107 1468656 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 23:48:00.765188 1468656 start.go:340] cluster config:
	{Name:download-only-481946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-481946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:48:00.766994 1468656 out.go:97] Starting "download-only-481946" primary control-plane node in "download-only-481946" cluster
	I1001 23:48:00.767012 1468656 cache.go:121] Beginning downloading kic base image for docker with crio
	I1001 23:48:00.768310 1468656 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1001 23:48:00.768332 1468656 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:48:00.768485 1468656 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 23:48:00.786264 1468656 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 23:48:00.786401 1468656 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 23:48:00.786425 1468656 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1001 23:48:00.786435 1468656 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1001 23:48:00.786443 1468656 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1001 23:48:00.823657 1468656 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1001 23:48:00.823688 1468656 cache.go:56] Caching tarball of preloaded images
	I1001 23:48:00.823853 1468656 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:48:00.825754 1468656 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1001 23:48:00.825773 1468656 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1001 23:48:00.895412 1468656 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1001 23:48:05.365800 1468656 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1001 23:48:05.365905 1468656 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19740-1463060/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-481946 host does not exist
	  To start a cluster, run: "minikube start -p download-only-481946"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
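
For reference, the non-zero exit above is expected: a --download-only start never creates the host, so "minikube logs" has nothing to read. A minimal sketch of checking that exit status by hand (profile name taken from the run above):

	# Hedged sketch: a download-only profile has no running host,
	# so `logs` is expected to exit with status 85, as seen above.
	out/minikube-linux-arm64 logs -p download-only-481946
	echo "exit status: $?"   # expected: 85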

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-481946
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.69s)

=== RUN   TestBinaryMirror
I1001 23:48:08.252756 1468453 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-904477 --alsologtostderr --binary-mirror http://127.0.0.1:33775 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-904477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-904477
--- PASS: TestBinaryMirror (0.69s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:932: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-902832
addons_test.go:932: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-902832: exit status 85 (71.007239ms)

-- stdout --
	* Profile "addons-902832" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-902832"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:943: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-902832
addons_test.go:943: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-902832: exit status 85 (69.794984ms)

-- stdout --
	* Profile "addons-902832" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-902832"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (221.27s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-902832 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-902832 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m41.271334351s)
--- PASS: TestAddons/Setup (221.27s)
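
As a follow-up to a start like the one above, the resulting addon states can be listed; a minimal sketch, assuming the profile from this run is still present:

	# Hedged sketch: confirm which addons the start above enabled
	out/minikube-linux-arm64 -p addons-902832 addons list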

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-902832 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-902832 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Registry (16.94s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.977241ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-wt4tb" [89b4caf4-80a6-4169-98c5-1a6ccdd606c0] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003514554s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8h2cr" [de013b46-27a0-473a-9c80-20d0ffeaaa75] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004211089s
addons_test.go:331: (dbg) Run:  kubectl --context addons-902832 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-902832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-902832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.828772119s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 ip
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.94s)
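
The registry check above reduces to a single in-cluster probe; a hedged, standalone version of the same verification (context, image, and service name taken from the run above):

	# Hedged sketch: probe the registry Service from inside the cluster;
	# wget --spider only checks reachability, it downloads nothing.
	kubectl --context addons-902832 run registry-test --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"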

TestAddons/parallel/InspektorGadget (11.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wbjds" [1daeb9ea-fa3b-4336-a154-3d91d7d44efd] Running
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003997277s
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-902832 addons disable inspektor-gadget --alsologtostderr -v=1: (5.84375688s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

TestAddons/parallel/CSI (58.26s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1002 00:00:37.728997 1468453 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 00:00:37.736572 1468453 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 00:00:37.736605 1468453 kapi.go:107] duration metric: took 7.621872ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.631972ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-902832 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-902832 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [73291066-ef21-4c6a-86d0-d6c045b21cfc] Pending
helpers_test.go:344: "task-pv-pod" [73291066-ef21-4c6a-86d0-d6c045b21cfc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [73291066-ef21-4c6a-86d0-d6c045b21cfc] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.014488835s
addons_test.go:511: (dbg) Run:  kubectl --context addons-902832 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-902832 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-902832 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-902832 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-902832 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-902832 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-902832 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d8c3c551-9615-42f8-b062-57bdb94b6d8b] Pending
helpers_test.go:344: "task-pv-pod-restore" [d8c3c551-9615-42f8-b062-57bdb94b6d8b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d8c3c551-9615-42f8-b062-57bdb94b6d8b] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003862447s
addons_test.go:553: (dbg) Run:  kubectl --context addons-902832 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-902832 delete pod task-pv-pod-restore: (1.14269226s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-902832 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-902832 delete volumesnapshot new-snapshot-demo
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-902832 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.752253217s)
--- PASS: TestAddons/parallel/CSI (58.26s)
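
The repeated "get pvc ... jsonpath" calls above are the helper's wait loop. A hedged shell equivalent (names from the run above; timeout handling omitted for brevity):

	# Hedged sketch: poll a PVC until its phase reaches Bound
	until [ "$(kubectl --context addons-902832 get pvc hpvc -n default \
	    -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2
	done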

TestAddons/parallel/Headlamp (21.89s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:741: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-902832 --alsologtostderr -v=1
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-hrgd9" [efef113f-c07e-444b-9d07-b690b08d1404] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-hrgd9" [efef113f-c07e-444b-9d07-b690b08d1404] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-hrgd9" [efef113f-c07e-444b-9d07-b690b08d1404] Running
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003740132s
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable headlamp --alsologtostderr -v=1
2024/10/02 00:00:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-902832 addons disable headlamp --alsologtostderr -v=1: (5.924153225s)
--- PASS: TestAddons/parallel/Headlamp (21.89s)

TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-jsmtz" [57dd61bc-85c6-4b21-b5a7-128955cf14e6] Running
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009167604s
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

TestAddons/parallel/LocalPath (52.84s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:881: (dbg) Run:  kubectl --context addons-902832 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:887: (dbg) Run:  kubectl --context addons-902832 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:891: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-902832 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7fd4cb23-29ff-4e35-9c0f-1b3d4f931540] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7fd4cb23-29ff-4e35-9c0f-1b3d4f931540] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7fd4cb23-29ff-4e35-9c0f-1b3d4f931540] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004551697s
addons_test.go:899: (dbg) Run:  kubectl --context addons-902832 get pvc test-pvc -o=json
addons_test.go:908: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 ssh "cat /opt/local-path-provisioner/pvc-cf99ba77-1628-40e8-9e38-1970b272e06c_default_test-pvc/file1"
addons_test.go:920: (dbg) Run:  kubectl --context addons-902832 delete pod test-local-path
addons_test.go:924: (dbg) Run:  kubectl --context addons-902832 delete pvc test-pvc
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-902832 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.650235654s)
--- PASS: TestAddons/parallel/LocalPath (52.84s)
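
The ssh "cat ..." step above reads the data the pod wrote onto the node through the local-path provisioner. A hedged sketch with the PVC UID left as a placeholder, since it differs on every run:

	# Hedged sketch: inspect data written via storage-provisioner-rancher;
	# <pvc-uid> is a placeholder, visible in `kubectl get pvc test-pvc -o json`.
	out/minikube-linux-arm64 -p addons-902832 ssh \
	  "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"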

TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zz9mg" [18ac45a3-6b0c-4535-a78d-cc801c2d3d20] Running
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003843543s
addons_test.go:959: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-902832
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

TestAddons/parallel/Yakd (12.34s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-sm42p" [0ca78a40-d615-4654-9edc-f46373bdc369] Running
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003607474s
addons_test.go:971: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable yakd --alsologtostderr -v=1
addons_test.go:971: (dbg) Done: out/minikube-linux-arm64 -p addons-902832 addons disable yakd --alsologtostderr -v=1: (6.332296214s)
--- PASS: TestAddons/parallel/Yakd (12.34s)

TestAddons/StoppedEnableDisable (12.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-902832
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-902832: (11.932661584s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-902832
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-902832
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-902832
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

TestCertOptions (35.79s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-525402 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-525402 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.148576201s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-525402 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-525402 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-525402 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-525402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-525402
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-525402: (1.990394896s)
--- PASS: TestCertOptions (35.79s)
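
The openssl step above is what validates the custom SANs and API server port. A hedged sketch narrowing the same output to the relevant section (the grep filter is added here for illustration):

	# Hedged sketch: confirm the custom --apiserver-ips/--apiserver-names
	# landed in the generated certificate's Subject Alternative Name field.
	out/minikube-linux-arm64 -p cert-options-525402 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"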

TestCertExpiration (247.52s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-238916 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-238916 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.598698613s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-238916 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-238916 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (23.882934061s)
helpers_test.go:175: Cleaning up "cert-expiration-238916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-238916
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-238916: (2.041496493s)
--- PASS: TestCertExpiration (247.52s)
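
Between the two starts above, the cluster certificates are reissued with the new --cert-expiration value. A hedged sketch for checking the resulting expiry by hand (standard openssl flags; the certificate path matches the one used in TestCertOptions above):

	# Hedged sketch: print the apiserver certificate's notAfter date
	out/minikube-linux-arm64 -p cert-expiration-238916 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"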

TestForceSystemdFlag (36.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-341519 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1002 00:48:49.001117 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-341519 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.409156462s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-341519 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-341519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-341519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-341519: (2.724734566s)
--- PASS: TestForceSystemdFlag (36.51s)
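
The "cat /etc/crio/crio.conf.d/02-crio.conf" step above is where --force-systemd is verified. A hedged sketch filtering for the relevant key (the grep is added for illustration; cgroup_manager is CRI-O's config key for this setting):

	# Hedged sketch: check that CRI-O was switched to the systemd cgroup manager
	out/minikube-linux-arm64 -p force-systemd-flag-341519 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager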

TestForceSystemdEnv (41.75s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-707494 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-707494 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.922058118s)
helpers_test.go:175: Cleaning up "force-systemd-env-707494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-707494
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-707494: (2.830516899s)
--- PASS: TestForceSystemdEnv (41.75s)

TestErrorSpam/setup (32.13s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-367794 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-367794 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-367794 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-367794 --driver=docker  --container-runtime=crio: (32.126708118s)
--- PASS: TestErrorSpam/setup (32.13s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.76s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 pause
--- PASS: TestErrorSpam/pause (1.76s)

TestErrorSpam/unpause (1.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 stop: (1.231648861s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-367794 --log_dir /tmp/nospam-367794 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19740-1463060/.minikube/files/etc/test/nested/copy/1468453/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-744852 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-744852 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.149627725s)
--- PASS: TestFunctional/serial/StartWithProxy (75.15s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.58s)

=== RUN   TestFunctional/serial/SoftStart
I1002 00:09:21.271613 1468453 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-744852 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-744852 --alsologtostderr -v=8: (29.583644114s)
functional_test.go:663: soft start took 29.584233343s for "functional-744852" cluster.
I1002 00:09:50.855581 1468453 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.58s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-744852 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-744852 cache add registry.k8s.io/pause:3.1: (1.408884858s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-744852 cache add registry.k8s.io/pause:3.3: (1.505606315s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-744852 cache add registry.k8s.io/pause:latest: (1.267491599s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-744852 /tmp/TestFunctionalserialCacheCmdcacheadd_local2302945312/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 cache add minikube-local-cache-test:functional-744852
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 cache delete minikube-local-cache-test:functional-744852
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-744852
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-744852 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.343631ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-744852 cache reload: (1.007251635s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)
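The sequence above is worth spelling out: the image is removed inside the node, `crictl inspecti` then fails with exit status 1 because the image is gone, and `cache reload` restores it from the host-side cache so the final `inspecti` succeeds. A rough Go equivalent, with the profile name again a placeholder:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and reports whether it exited zero.
	func run(name string, args ...string) bool {
		return exec.Command(name, args...).Run() == nil
	}

	func main() {
		p := "functional-744852" // placeholder profile name
		run("minikube", "-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		if !run("minikube", "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") {
			fmt.Println("image absent in node, reloading from host cache")
			run("minikube", "-p", p, "cache", "reload")
		}
	}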

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 kubectl -- --context functional-744852 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-744852 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (36.73s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-744852 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-744852 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.725417028s)
functional_test.go:761: restart took 36.725526432s for "functional-744852" cluster.
I1002 00:10:36.064136 1468453 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (36.73s)
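`--extra-config` forwards a flag to a specific component (here, enabling the NamespaceAutoProvision admission plugin on the apiserver), and `--wait=all` keeps `start` blocked until every verified component reports healthy. A minimal sketch of the same restart, assuming minikube on PATH:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// same flags as the logged invocation above
		cmd := exec.Command("minikube", "start", "-p", "functional-744852",
			"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
			"--wait=all")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("start failed: %v\n%s", err, out)
		}
	}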

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-744852 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
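The health check lists the control-plane pods as JSON and asserts each is both Running (phase) and Ready (condition), which is exactly what the log lines above report per component. A self-contained sketch of that check; the struct fields follow the standard Kubernetes pod schema and the context name comes from the log:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-744852",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}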

TestFunctional/serial/LogsCmd (1.73s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-744852 logs: (1.731389211s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

TestFunctional/serial/LogsFileCmd (1.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 logs --file /tmp/TestFunctionalserialLogsFileCmd973910767/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-744852 logs --file /tmp/TestFunctionalserialLogsFileCmd973910767/001/logs.txt: (1.731295076s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

TestFunctional/serial/InvalidService (4.39s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-744852 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-744852
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-744852: exit status 115 (655.966898ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30371 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr **
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-744852 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)
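Exit status 115 is minikube's SVC_UNREACHABLE code: the Service object exists and receives a NodePort, but no running pod backs it, so `minikube service` refuses to hand back a usable URL. A hedged sketch of detecting that outcome from a caller:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-744852").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// 115 in the log above means: service defined, no running backing pod
			fmt.Println("service unreachable, exit status:", ee.ExitCode())
		}
	}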

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-744852 config get cpus: exit status 14 (61.299164ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-744852 config get cpus: exit status 14 (73.746104ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
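Note the semantics being asserted: `config get` on an unset key does not print an empty string, it exits with status 14, both before the `set` and again after the `unset`. A small sketch that surfaces that exit code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// getConfig returns the value for key, or the non-zero exit code (14 = key not set).
	func getConfig(profile, key string) (string, int) {
		out, err := exec.Command("minikube", "-p", profile, "config", "get", key).Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return "", ee.ExitCode()
		}
		return strings.TrimSpace(string(out)), 0
	}

	func main() {
		val, code := getConfig("functional-744852", "cpus")
		fmt.Printf("value=%q exit=%d\n", val, code)
	}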

TestFunctional/parallel/DashboardCmd (9.97s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-744852 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-744852 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1502006: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.97s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-744852 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-744852 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.349169ms)

-- stdout --
	* [functional-744852] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 00:11:20.273658 1501440 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:11:20.273885 1501440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:11:20.273913 1501440 out.go:358] Setting ErrFile to fd 2...
	I1002 00:11:20.273934 1501440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:11:20.274218 1501440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	I1002 00:11:20.274627 1501440 out.go:352] Setting JSON to false
	I1002 00:11:20.275603 1501440 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21221,"bootTime":1727806660,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 00:11:20.275713 1501440 start.go:139] virtualization:  
	I1002 00:11:20.278720 1501440 out.go:177] * [functional-744852] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1002 00:11:20.280375 1501440 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:11:20.280430 1501440 notify.go:220] Checking for updates...
	I1002 00:11:20.283171 1501440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:11:20.284818 1501440 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	I1002 00:11:20.286637 1501440 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	I1002 00:11:20.289152 1501440 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 00:11:20.290611 1501440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:11:20.293128 1501440 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:11:20.293727 1501440 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:11:20.319210 1501440 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1002 00:11:20.319329 1501440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:11:20.382694 1501440 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-02 00:11:20.373235454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:11:20.382822 1501440 docker.go:318] overlay module found
	I1002 00:11:20.384512 1501440 out.go:177] * Using the docker driver based on existing profile
	I1002 00:11:20.385776 1501440 start.go:297] selected driver: docker
	I1002 00:11:20.385796 1501440 start.go:901] validating driver "docker" against &{Name:functional-744852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-744852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:11:20.385913 1501440 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:11:20.387900 1501440 out.go:201] 
	W1002 00:11:20.389367 1501440 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 00:11:20.390688 1501440 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-744852 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.53s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-744852 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-744852 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (218.681772ms)

-- stdout --
	* [functional-744852] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 00:11:20.049396 1501394 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:11:20.049662 1501394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:11:20.049692 1501394 out.go:358] Setting ErrFile to fd 2...
	I1002 00:11:20.049716 1501394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:11:20.050177 1501394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	I1002 00:11:20.050677 1501394 out.go:352] Setting JSON to false
	I1002 00:11:20.051825 1501394 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21220,"bootTime":1727806660,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 00:11:20.051940 1501394 start.go:139] virtualization:  
	I1002 00:11:20.055649 1501394 out.go:177] * [functional-744852] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1002 00:11:20.059339 1501394 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:11:20.059405 1501394 notify.go:220] Checking for updates...
	I1002 00:11:20.062393 1501394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:11:20.065063 1501394 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	I1002 00:11:20.067871 1501394 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	I1002 00:11:20.070789 1501394 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 00:11:20.073787 1501394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:11:20.077039 1501394 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:11:20.077697 1501394 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:11:20.112328 1501394 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1002 00:11:20.112474 1501394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:11:20.166869 1501394 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-02 00:11:20.154786456 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:11:20.166998 1501394 docker.go:318] overlay module found
	I1002 00:11:20.175477 1501394 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1002 00:11:20.178189 1501394 start.go:297] selected driver: docker
	I1002 00:11:20.178261 1501394 start.go:901] validating driver "docker" against &{Name:functional-744852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-744852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:11:20.178414 1501394 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:11:20.182555 1501394 out.go:201] 
	W1002 00:11:20.185972 1501394 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 00:11:20.189580 1501394 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)
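`status -f` takes a Go template over minikube's status structure; the `kublet:` label in the logged command is a typo in the test's own format string, but the field reference `.Kubelet` is correct. For example:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// status exits non-zero for some cluster states, so the error is ignored here
		out, _ := exec.Command("minikube", "-p", "functional-744852", "status",
			"-f", "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").Output()
		fmt.Println(string(out))
	}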

TestFunctional/parallel/ServiceCmdConnect (11.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-744852 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-744852 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-m9gjx" [daf47fd6-705d-4a23-a42e-5a294ba214a1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-m9gjx" [daf47fd6-705d-4a23-a42e-5a294ba214a1] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004316112s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31377
functional_test.go:1675: http://192.168.49.2:31377: success! body:

Hostname: hello-node-connect-65d86f57f4-m9gjx

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31377
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.68s)
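The connect test is a compact create → expose → fetch loop: create a deployment, expose it as a NodePort service, ask minikube for the reachable URL, and GET it. A sketch of the same flow (it omits the pod-readiness wait the harness performs before fetching):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		name := "functional-744852" // profile and kubectl context share this name here
		exec.Command("kubectl", "--context", name, "create", "deployment", "hello-node-connect",
			"--image=registry.k8s.io/echoserver-arm:1.8").Run()
		exec.Command("kubectl", "--context", name, "expose", "deployment", "hello-node-connect",
			"--type=NodePort", "--port=8080").Run()
		// ask minikube for the NodePort URL, as in the logged `service --url` call
		url, _ := exec.Command("minikube", "-p", name, "service", "hello-node-connect", "--url").Output()
		resp, err := http.Get(strings.TrimSpace(string(url)))
		if err != nil {
			fmt.Println("fetch failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(string(body))
	}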

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (26.19s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fa37a23f-0efa-4a16-951c-0266bbbf1e43] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004238391s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-744852 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-744852 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-744852 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-744852 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [043d9204-7087-403f-bdab-7b532e856244] Pending
helpers_test.go:344: "sp-pod" [043d9204-7087-403f-bdab-7b532e856244] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [043d9204-7087-403f-bdab-7b532e856244] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00345596s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-744852 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-744852 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-744852 delete -f testdata/storage-provisioner/pod.yaml: (1.201022199s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-744852 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a1e71eca-1897-4e4c-b79e-4f45f9d7a67c] Pending
helpers_test.go:344: "sp-pod" [a1e71eca-1897-4e4c-b79e-4f45f9d7a67c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004611427s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-744852 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.19s)
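The persistence check is the heart of this test: a file written through the claim must survive deleting and re-creating the consuming pod, because the data lives in the provisioned volume rather than the container filesystem. Assuming the same manifests and context as the log, the post-setup steps look roughly like:

	package main

	import (
		"log"
		"os/exec"
	)

	func kubectl(args ...string) {
		full := append([]string{"--context", "functional-744852"}, args...)
		if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
	}

	func main() {
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (the harness waits here for the new pod to reach Running)
		kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // should still list "foo"
	}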

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (2.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh -n functional-744852 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 cp functional-744852:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2533487645/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh -n functional-744852 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh -n functional-744852 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.28s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1468453/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo cat /etc/test/nested/copy/1468453/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.14s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1468453.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo cat /etc/ssl/certs/1468453.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1468453.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo cat /usr/share/ca-certificates/1468453.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14684532.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo cat /etc/ssl/certs/14684532.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14684532.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo cat /usr/share/ca-certificates/14684532.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)
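The paths being probed are the two places minikube syncs user certificates into the guest (/etc/ssl/certs and /usr/share/ca-certificates); the numeric .0 entries such as 51391683.0 appear to be OpenSSL-style subject-hash names for the same certificates. A quick presence check along the lines of the test:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, path := range []string{
			"/etc/ssl/certs/1468453.pem",
			"/usr/share/ca-certificates/1468453.pem",
			"/etc/ssl/certs/51391683.0",
		} {
			// `minikube ssh <cmd>` runs the command inside the node
			err := exec.Command("minikube", "-p", "functional-744852", "ssh",
				"sudo test -f "+path).Run()
			fmt.Printf("%s present: %v\n", path, err == nil)
		}
	}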

TestFunctional/parallel/NodeLabels (0.21s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-744852 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.21s)
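The query uses kubectl's go-template output to iterate the label map of the first node and print each key. Reproduced as a standalone program:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-744852", "get", "nodes",
			"--output=go-template",
			"--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}").Output()
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		fmt.Println(string(out)) // e.g. kubernetes.io/arch kubernetes.io/hostname ...
	}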

TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-744852 ssh "sudo systemctl is-active docker": exit status 1 (357.407176ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-744852 ssh "sudo systemctl is-active containerd": exit status 1 (404.808834ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-744852 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-744852 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-744852 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1499224: os: process already finished
helpers_test.go:508: unable to kill pid 1499054: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-744852 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-744852 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-744852 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f47e169e-a5a1-4139-85c4-277e5cd350d7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f47e169e-a5a1-4139-85c4-277e5cd350d7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004802598s
I1002 00:10:56.385564 1468453 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-744852 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
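With `minikube tunnel` running, the LoadBalancer service eventually has an ingress IP filled in, and the jsonpath above reads it. A sketch that polls for the assignment instead of checking once:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 30; i++ {
			out, _ := exec.Command("kubectl", "--context", "functional-744852",
				"get", "svc", "nginx-svc",
				"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
			if ip := strings.TrimSpace(string(out)); ip != "" {
				fmt.Println("ingress IP:", ip) // 10.110.229.5 in the run above
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("no ingress IP assigned; is `minikube tunnel` running?")
	}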

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.229.5 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-744852 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-744852 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-744852 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-lzbmh" [5b71534f-1c93-4aa5-8a31-b5296e09cbbb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-lzbmh" [5b71534f-1c93-4aa5-8a31-b5296e09cbbb] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003028632s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "364.462957ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "58.180552ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "357.038486ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "56.062646ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.14s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdany-port822977643/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727827873766671387" to /tmp/TestFunctionalparallelMountCmdany-port822977643/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727827873766671387" to /tmp/TestFunctionalparallelMountCmdany-port822977643/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727827873766671387" to /tmp/TestFunctionalparallelMountCmdany-port822977643/001/test-1727827873766671387
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (323.925365ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1002 00:11:14.090845 1468453 retry.go:31] will retry after 477.716642ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 00:11 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 00:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 00:11 test-1727827873766671387
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh cat /mount-9p/test-1727827873766671387
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-744852 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2919092d-08c2-4895-9c38-6b116f3bf14b] Pending
helpers_test.go:344: "busybox-mount" [2919092d-08c2-4895-9c38-6b116f3bf14b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2919092d-08c2-4895-9c38-6b116f3bf14b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2919092d-08c2-4895-9c38-6b116f3bf14b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007159144s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-744852 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdany-port822977643/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.14s)
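The first findmnt probe above fails while the 9p mount daemon is still starting and is retried (retry.go:31). A minimal Go sketch of that probe-with-backoff loop, assuming the same binary and profile:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Probe the 9p mount from inside the guest, retrying while the mount
	// daemon is still coming up, as retry.go does above.
	for attempt := 0; attempt < 5; attempt++ {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-744852",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			log.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // this run backed off ~478ms
	}
	log.Fatal("mount never appeared")
}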

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 service list -o json
functional_test.go:1494: Took "497.68048ms" to run "out/minikube-linux-arm64 -p functional-744852 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30704
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30704
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
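A short Go sketch tying the ServiceCmd subtests together: resolve the NodePort URL as the test does, then issue a GET against it. The HTTP request is an addition for illustration; the subtest above stops at resolving the URL.

package main

import (
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort endpoint exactly as the URL subtest does.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-744852",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:30704 in this run
	// Illustration only: probe the endpoint the URL points at.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	log.Printf("%s -> %d (%d bytes)", url, resp.StatusCode, len(body))
}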

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.16s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdspecific-port117699816/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (526.092545ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1002 00:11:22.431875 1468453 retry.go:31] will retry after 610.447632ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdspecific-port117699816/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-744852 ssh "sudo umount -f /mount-9p": exit status 1 (281.339428ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-744852 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdspecific-port117699816/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.5s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdVerifyCleanup116445088/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdVerifyCleanup116445088/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdVerifyCleanup116445088/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T" /mount1: exit status 1 (672.168829ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1002 00:11:24.738177 1468453 retry.go:31] will retry after 686.546246ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-744852 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdVerifyCleanup116445088/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdVerifyCleanup116445088/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-744852 /tmp/TestFunctionalparallelMountCmdVerifyCleanup116445088/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.50s)
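VerifyCleanup starts three mount daemons and relies on a single mount --kill=true to reap them all, as the stop lines above show. A Go sketch of that pattern; the host source directory is a placeholder:

package main

import (
	"log"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-linux-arm64", "functional-744852"
	// Start three mount daemons against one host directory (placeholder
	// path), as the test does with /mount1../mount3.
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		cmd := exec.Command(bin, "mount", "-p", profile, "/tmp/src:"+target)
		if err := cmd.Start(); err != nil { // daemons: Start, don't Wait
			log.Fatal(err)
		}
	}
	// A single --kill=true reaps every mount process for the profile.
	if out, err := exec.Command(bin, "mount", "-p", profile, "--kill=true").CombinedOutput(); err != nil {
		log.Fatalf("kill: %v\n%s", err, out)
	}
}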

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.13s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-744852 version -o=json --components: (1.129530068s)
--- PASS: TestFunctional/parallel/Version/components (1.13s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-744852 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-744852
localhost/kicbase/echo-server:functional-744852
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-744852 image ls --format short --alsologtostderr:
I1002 00:11:36.095691 1504242 out.go:345] Setting OutFile to fd 1 ...
I1002 00:11:36.095816 1504242 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1002 00:11:36.095858 1504242 out.go:358] Setting ErrFile to fd 2...
I1002 00:11:36.095869 1504242 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1002 00:11:36.096135 1504242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
I1002 00:11:36.096877 1504242 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1002 00:11:36.097017 1504242 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1002 00:11:36.097617 1504242 cli_runner.go:164] Run: docker container inspect functional-744852 --format={{.State.Status}}
I1002 00:11:36.119542 1504242 ssh_runner.go:195] Run: systemctl --version
I1002 00:11:36.119607 1504242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744852
I1002 00:11:36.143450 1504242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34304 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/functional-744852/id_rsa Username:docker}
I1002 00:11:36.245411 1504242 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-744852 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| docker.io/library/nginx                 | latest             | 6e8672ddd037e | 197MB  |
| localhost/kicbase/echo-server           | functional-744852  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/library/nginx                 | alpine             | b887aca7aed61 | 48.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/minikube-local-cache-test     | functional-744852  | de8aadac908ab | 3.33kB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-744852 image ls --format table --alsologtostderr:
I1002 00:11:36.398219 1504313 out.go:345] Setting OutFile to fd 1 ...
I1002 00:11:36.399497 1504313 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1002 00:11:36.399641 1504313 out.go:358] Setting ErrFile to fd 2...
I1002 00:11:36.399671 1504313 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1002 00:11:36.402223 1504313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
I1002 00:11:36.403020 1504313 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1002 00:11:36.403216 1504313 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1002 00:11:36.403765 1504313 cli_runner.go:164] Run: docker container inspect functional-744852 --format={{.State.Status}}
I1002 00:11:36.430038 1504313 ssh_runner.go:195] Run: systemctl --version
I1002 00:11:36.430107 1504313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744852
I1002 00:11:36.460294 1504313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34304 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/functional-744852/id_rsa Username:docker}
I1002 00:11:36.555990 1504313 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-744852 image ls --format json --alsologtostderr:
[{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75a
ff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","regis
try.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-744852"],"size":"4788229"},{"id":"de8aadac908aba1569bf16bc3080a5aeaff909bd57840f655300791adf0526bf","repoDigests":["localhost/minikube-local-cache-test@sha256:e7262b35a978e23d586764400b96303729d3f3dff695b08b428e06aca35d84b0"],"repoTags":["localhost/minikube-local-cache-test:functional-744852"],"size":"3330"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b2
6f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f51
2f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags"
:["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48375489"},{"id":"6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2","repoDigests":["docker.io/library/nginx@sha256:1b1f09a6239162ae97b9d262db13572367bd4fa2c9d27adb75aface0223b9c09","docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172541"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e
399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-744852 image ls --format json --alsologtostderr:
I1002 00:11:36.380116 1504307 out.go:345] Setting OutFile to fd 1 ...
I1002 00:11:36.380281 1504307 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1002 00:11:36.380291 1504307 out.go:358] Setting ErrFile to fd 2...
I1002 00:11:36.380297 1504307 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1002 00:11:36.380526 1504307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
I1002 00:11:36.381150 1504307 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1002 00:11:36.381266 1504307 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1002 00:11:36.381859 1504307 cli_runner.go:164] Run: docker container inspect functional-744852 --format={{.State.Status}}
I1002 00:11:36.404092 1504307 ssh_runner.go:195] Run: systemctl --version
I1002 00:11:36.404141 1504307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744852
I1002 00:11:36.434947 1504307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34304 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/functional-744852/id_rsa Username:docker}
I1002 00:11:36.531931 1504307 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
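The JSON payload above is an array of image records. A Go sketch decoding it with the field set visible in this output (inferred from the output itself, not from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-744852",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%-12.12s %v\n", img.ID, img.RepoTags)
	}
}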

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-744852 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "48375489"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-744852
size: "4788229"
- id: de8aadac908aba1569bf16bc3080a5aeaff909bd57840f655300791adf0526bf
repoDigests:
- localhost/minikube-local-cache-test@sha256:e7262b35a978e23d586764400b96303729d3f3dff695b08b428e06aca35d84b0
repoTags:
- localhost/minikube-local-cache-test:functional-744852
size: "3330"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: 6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2
repoDigests:
- docker.io/library/nginx@sha256:1b1f09a6239162ae97b9d262db13572367bd4fa2c9d27adb75aface0223b9c09
- docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb
repoTags:
- docker.io/library/nginx:latest
size: "197172541"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-744852 image ls --format yaml --alsologtostderr:
I1002 00:11:36.092816 1504243 out.go:345] Setting OutFile to fd 1 ...
I1002 00:11:36.093054 1504243 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1002 00:11:36.093084 1504243 out.go:358] Setting ErrFile to fd 2...
I1002 00:11:36.093106 1504243 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1002 00:11:36.093407 1504243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
I1002 00:11:36.094134 1504243 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1002 00:11:36.094324 1504243 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1002 00:11:36.094922 1504243 cli_runner.go:164] Run: docker container inspect functional-744852 --format={{.State.Status}}
I1002 00:11:36.113399 1504243 ssh_runner.go:195] Run: systemctl --version
I1002 00:11:36.113454 1504243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744852
I1002 00:11:36.138695 1504243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34304 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/functional-744852/id_rsa Username:docker}
I1002 00:11:36.235655 1504243 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-744852 ssh pgrep buildkitd: exit status 1 (273.47345ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image build -t localhost/my-image:functional-744852 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-744852 image build -t localhost/my-image:functional-744852 testdata/build --alsologtostderr: (2.830832315s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-744852 image build -t localhost/my-image:functional-744852 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d13a62528e5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-744852
--> 0ca02b75b87
Successfully tagged localhost/my-image:functional-744852
0ca02b75b8701557d1176c470ce2c9c5f2fde6d3301606da7ef5ecd3cd0694fc
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-744852 image build -t localhost/my-image:functional-744852 testdata/build --alsologtostderr:
I1002 00:11:36.919475 1504428 out.go:345] Setting OutFile to fd 1 ...
I1002 00:11:36.920115 1504428 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1002 00:11:36.920133 1504428 out.go:358] Setting ErrFile to fd 2...
I1002 00:11:36.920139 1504428 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1002 00:11:36.920408 1504428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
I1002 00:11:36.921103 1504428 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1002 00:11:36.921720 1504428 config.go:182] Loaded profile config "functional-744852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1002 00:11:36.922360 1504428 cli_runner.go:164] Run: docker container inspect functional-744852 --format={{.State.Status}}
I1002 00:11:36.939052 1504428 ssh_runner.go:195] Run: systemctl --version
I1002 00:11:36.939114 1504428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744852
I1002 00:11:36.956184 1504428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34304 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/functional-744852/id_rsa Username:docker}
I1002 00:11:37.048392 1504428 build_images.go:161] Building image from path: /tmp/build.4222062921.tar
I1002 00:11:37.048470 1504428 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 00:11:37.057645 1504428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4222062921.tar
I1002 00:11:37.061680 1504428 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4222062921.tar: stat -c "%s %y" /var/lib/minikube/build/build.4222062921.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4222062921.tar': No such file or directory
I1002 00:11:37.061733 1504428 ssh_runner.go:362] scp /tmp/build.4222062921.tar --> /var/lib/minikube/build/build.4222062921.tar (3072 bytes)
I1002 00:11:37.093747 1504428 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4222062921
I1002 00:11:37.103721 1504428 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4222062921 -xf /var/lib/minikube/build/build.4222062921.tar
I1002 00:11:37.113418 1504428 crio.go:315] Building image: /var/lib/minikube/build/build.4222062921
I1002 00:11:37.113528 1504428 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-744852 /var/lib/minikube/build/build.4222062921 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1002 00:11:39.672602 1504428 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-744852 /var/lib/minikube/build/build.4222062921 --cgroup-manager=cgroupfs: (2.559030981s)
I1002 00:11:39.672718 1504428 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4222062921
I1002 00:11:39.682139 1504428 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4222062921.tar
I1002 00:11:39.690706 1504428 build_images.go:217] Built localhost/my-image:functional-744852 from /tmp/build.4222062921.tar
I1002 00:11:39.690739 1504428 build_images.go:133] succeeded building to: functional-744852
I1002 00:11:39.690744 1504428 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.34s)
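The STEP lines above imply a build context equivalent to FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. A Go sketch that recreates such a context and repeats the build invocation; the Dockerfile contents are an inference from the log, not the verbatim testdata/build:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Reconstructed from the STEP 1/3..3/3 lines above (an inference).
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-744852",
		"image", "build", "-t", "localhost/my-image:functional-744852", dir).CombinedOutput()
	if err != nil {
		log.Fatalf("build: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}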

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.82s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-744852
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.82s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image load --daemon kicbase/echo-server:functional-744852 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-744852 image load --daemon kicbase/echo-server:functional-744852 --alsologtostderr: (1.462098197s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image ls
2024/10/02 00:11:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.26s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image load --daemon kicbase/echo-server:functional-744852 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-744852 image load --daemon kicbase/echo-server:functional-744852 --alsologtostderr: (1.004460675s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-744852
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image load --daemon kicbase/echo-server:functional-744852 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image save kicbase/echo-server:functional-744852 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image rm kicbase/echo-server:functional-744852 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-744852
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-744852 image save --daemon kicbase/echo-server:functional-744852 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-744852
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
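ImageSaveToFile and ImageLoadFromFile above form a tarball round trip. A condensed Go sketch of the pair, with the tar path copied from this run:

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput(); err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	tar := "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar"
	// Export the tagged image to a tarball, then re-import it into the runtime.
	run("-p", "functional-744852", "image", "save",
		"kicbase/echo-server:functional-744852", tar)
	run("-p", "functional-744852", "image", "load", tar)
}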

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-744852
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-744852
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-744852
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (173.09s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-878052 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1002 00:11:51.063029 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:51.069399 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:51.080845 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:51.102323 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:51.143811 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:51.226835 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:51.389148 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:51.710873 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:52.352597 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:53.633906 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:56.195779 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:01.318060 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:11.559320 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:32.041361 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:13:13.012646 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:14:34.934697 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-878052 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m52.291609832s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (173.09s)
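
Outside the test harness, the same three-control-plane topology comes from the flags shown above; a minimal sketch, assuming the minikube binary is on PATH:

	# --ha provisions three control-plane nodes behind a shared apiserver endpoint
	minikube start -p ha-878052 --ha --wait=true --memory=2200 --driver=docker --container-runtime=crio
	# status prints host/kubelet/apiserver/kubeconfig state for every node
	minikube -p ha-878052 status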

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-878052 -- rollout status deployment/busybox: (6.125511781s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-9j76q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-jf4wm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-wgm7q -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-9j76q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-jf4wm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-wgm7q -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-9j76q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-jf4wm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-wgm7q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.13s)
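
The deployment and DNS probes translate directly to plain kubectl; a minimal sketch, assuming the busybox manifest from testdata (pod names below are illustrative and differ per run):

	kubectl --context ha-878052 apply -f ha-pod-dns-test.yaml
	kubectl --context ha-878052 rollout status deployment/busybox
	# check both external and in-cluster resolution from inside a pod
	kubectl --context ha-878052 exec <busybox-pod> -- nslookup kubernetes.io
	kubectl --context ha-878052 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local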

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-9j76q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-9j76q -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-jf4wm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-jf4wm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-wgm7q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-878052 -- exec busybox-7dff88458-wgm7q -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.70s)
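
The shell pipeline above pulls the resolved address out of nslookup's output (fifth line, third space-separated field) so it can be compared against the host gateway; a minimal sketch, assuming a running busybox pod:

	# host.minikube.internal resolves to the host side of the cluster network
	kubectl --context ha-878052 exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# confirm the pod can reach the host (192.168.49.1 on this run's network)
	kubectl --context ha-878052 exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"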

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (62.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-878052 -v=7 --alsologtostderr
E1002 00:15:45.927448 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:45.934048 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:45.945405 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:45.966812 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:46.008451 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:46.090036 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:46.252371 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:46.573680 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:47.216467 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-878052 -v=7 --alsologtostderr: (1m1.653365621s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 status -v=7 --alsologtostderr
E1002 00:15:48.499844 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.64s)
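
Adding a fourth machine is a single command; a minimal sketch:

	# without --control-plane the new node joins as a worker (m04 here)
	minikube node add -p ha-878052
	minikube -p ha-878052 status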

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-878052 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.028684411s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 status --output json -v=7 --alsologtostderr
E1002 00:15:51.065532 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp testdata/cp-test.txt ha-878052:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3389343479/001/cp-test_ha-878052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052:/home/docker/cp-test.txt ha-878052-m02:/home/docker/cp-test_ha-878052_ha-878052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m02 "sudo cat /home/docker/cp-test_ha-878052_ha-878052-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052:/home/docker/cp-test.txt ha-878052-m03:/home/docker/cp-test_ha-878052_ha-878052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m03 "sudo cat /home/docker/cp-test_ha-878052_ha-878052-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052:/home/docker/cp-test.txt ha-878052-m04:/home/docker/cp-test_ha-878052_ha-878052-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m04 "sudo cat /home/docker/cp-test_ha-878052_ha-878052-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp testdata/cp-test.txt ha-878052-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m02 "sudo cat /home/docker/cp-test.txt"
E1002 00:15:56.188099 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3389343479/001/cp-test_ha-878052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m02:/home/docker/cp-test.txt ha-878052:/home/docker/cp-test_ha-878052-m02_ha-878052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052 "sudo cat /home/docker/cp-test_ha-878052-m02_ha-878052.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m02:/home/docker/cp-test.txt ha-878052-m03:/home/docker/cp-test_ha-878052-m02_ha-878052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m03 "sudo cat /home/docker/cp-test_ha-878052-m02_ha-878052-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m02:/home/docker/cp-test.txt ha-878052-m04:/home/docker/cp-test_ha-878052-m02_ha-878052-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m04 "sudo cat /home/docker/cp-test_ha-878052-m02_ha-878052-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp testdata/cp-test.txt ha-878052-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3389343479/001/cp-test_ha-878052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m03:/home/docker/cp-test.txt ha-878052:/home/docker/cp-test_ha-878052-m03_ha-878052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052 "sudo cat /home/docker/cp-test_ha-878052-m03_ha-878052.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m03:/home/docker/cp-test.txt ha-878052-m02:/home/docker/cp-test_ha-878052-m03_ha-878052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m02 "sudo cat /home/docker/cp-test_ha-878052-m03_ha-878052-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m03:/home/docker/cp-test.txt ha-878052-m04:/home/docker/cp-test_ha-878052-m03_ha-878052-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m04 "sudo cat /home/docker/cp-test_ha-878052-m03_ha-878052-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp testdata/cp-test.txt ha-878052-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3389343479/001/cp-test_ha-878052-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m04:/home/docker/cp-test.txt ha-878052:/home/docker/cp-test_ha-878052-m04_ha-878052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m04 "sudo cat /home/docker/cp-test.txt"
E1002 00:16:06.429402 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052 "sudo cat /home/docker/cp-test_ha-878052-m04_ha-878052.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m04:/home/docker/cp-test.txt ha-878052-m02:/home/docker/cp-test_ha-878052-m04_ha-878052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m02 "sudo cat /home/docker/cp-test_ha-878052-m04_ha-878052-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 cp ha-878052-m04:/home/docker/cp-test.txt ha-878052-m03:/home/docker/cp-test_ha-878052-m04_ha-878052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 ssh -n ha-878052-m03 "sudo cat /home/docker/cp-test_ha-878052-m04_ha-878052-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.53s)
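
The matrix above exercises every direction minikube cp supports; a minimal sketch of the three forms, using the node names reported by status:

	# host -> node
	minikube -p ha-878052 cp testdata/cp-test.txt ha-878052-m02:/home/docker/cp-test.txt
	# node -> host
	minikube -p ha-878052 cp ha-878052-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-878052-m02.txt
	# node -> node
	minikube -p ha-878052 cp ha-878052-m02:/home/docker/cp-test.txt ha-878052-m03:/home/docker/cp-test.txt
	# read the file back over ssh to verify the copy
	minikube -p ha-878052 ssh -n ha-878052-m03 "sudo cat /home/docker/cp-test.txt"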

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-878052 node stop m02 -v=7 --alsologtostderr: (12.01250426s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-878052 status -v=7 --alsologtostderr: exit status 7 (727.70789ms)

                                                
                                                
-- stdout --
	ha-878052
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-878052-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-878052-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-878052-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:16:21.064180 1520147 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:16:21.064378 1520147 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:16:21.064393 1520147 out.go:358] Setting ErrFile to fd 2...
	I1002 00:16:21.064400 1520147 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:16:21.064728 1520147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	I1002 00:16:21.065030 1520147 out.go:352] Setting JSON to false
	I1002 00:16:21.065132 1520147 mustload.go:65] Loading cluster: ha-878052
	I1002 00:16:21.065232 1520147 notify.go:220] Checking for updates...
	I1002 00:16:21.065619 1520147 config.go:182] Loaded profile config "ha-878052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:16:21.065645 1520147 status.go:174] checking status of ha-878052 ...
	I1002 00:16:21.066354 1520147 cli_runner.go:164] Run: docker container inspect ha-878052 --format={{.State.Status}}
	I1002 00:16:21.086398 1520147 status.go:371] ha-878052 host status = "Running" (err=<nil>)
	I1002 00:16:21.086419 1520147 host.go:66] Checking if "ha-878052" exists ...
	I1002 00:16:21.086835 1520147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-878052
	I1002 00:16:21.107459 1520147 host.go:66] Checking if "ha-878052" exists ...
	I1002 00:16:21.107771 1520147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:16:21.107815 1520147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-878052
	I1002 00:16:21.125725 1520147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34309 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/ha-878052/id_rsa Username:docker}
	I1002 00:16:21.224670 1520147 ssh_runner.go:195] Run: systemctl --version
	I1002 00:16:21.228852 1520147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:16:21.239758 1520147 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:16:21.310228 1520147 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-02 00:16:21.299621472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:16:21.310811 1520147 kubeconfig.go:125] found "ha-878052" server: "https://192.168.49.254:8443"
	I1002 00:16:21.310844 1520147 api_server.go:166] Checking apiserver status ...
	I1002 00:16:21.310889 1520147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:16:21.321974 1520147 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1423/cgroup
	I1002 00:16:21.331687 1520147 api_server.go:182] apiserver freezer: "4:freezer:/docker/344b0bcec95a8af8dfc051c60b4766ac55dea36ddbfb0bcab72c685ea856ceb1/crio/crio-8fe47ffdcac3423a0e66c554efb45919c8cfff2bf5f6c3de48ec4acd9b52d201"
	I1002 00:16:21.331762 1520147 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/344b0bcec95a8af8dfc051c60b4766ac55dea36ddbfb0bcab72c685ea856ceb1/crio/crio-8fe47ffdcac3423a0e66c554efb45919c8cfff2bf5f6c3de48ec4acd9b52d201/freezer.state
	I1002 00:16:21.341557 1520147 api_server.go:204] freezer state: "THAWED"
	I1002 00:16:21.341587 1520147 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 00:16:21.350811 1520147 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 00:16:21.350891 1520147 status.go:463] ha-878052 apiserver status = Running (err=<nil>)
	I1002 00:16:21.350920 1520147 status.go:176] ha-878052 status: &{Name:ha-878052 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:16:21.350961 1520147 status.go:174] checking status of ha-878052-m02 ...
	I1002 00:16:21.351362 1520147 cli_runner.go:164] Run: docker container inspect ha-878052-m02 --format={{.State.Status}}
	I1002 00:16:21.368468 1520147 status.go:371] ha-878052-m02 host status = "Stopped" (err=<nil>)
	I1002 00:16:21.368492 1520147 status.go:384] host is not running, skipping remaining checks
	I1002 00:16:21.368499 1520147 status.go:176] ha-878052-m02 status: &{Name:ha-878052-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:16:21.368520 1520147 status.go:174] checking status of ha-878052-m03 ...
	I1002 00:16:21.368837 1520147 cli_runner.go:164] Run: docker container inspect ha-878052-m03 --format={{.State.Status}}
	I1002 00:16:21.385035 1520147 status.go:371] ha-878052-m03 host status = "Running" (err=<nil>)
	I1002 00:16:21.385070 1520147 host.go:66] Checking if "ha-878052-m03" exists ...
	I1002 00:16:21.385382 1520147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-878052-m03
	I1002 00:16:21.400961 1520147 host.go:66] Checking if "ha-878052-m03" exists ...
	I1002 00:16:21.401363 1520147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:16:21.401413 1520147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-878052-m03
	I1002 00:16:21.418208 1520147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34319 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/ha-878052-m03/id_rsa Username:docker}
	I1002 00:16:21.512435 1520147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:16:21.527004 1520147 kubeconfig.go:125] found "ha-878052" server: "https://192.168.49.254:8443"
	I1002 00:16:21.527035 1520147 api_server.go:166] Checking apiserver status ...
	I1002 00:16:21.527084 1520147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:16:21.539028 1520147 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1329/cgroup
	I1002 00:16:21.550431 1520147 api_server.go:182] apiserver freezer: "4:freezer:/docker/6a4f4e2be76148399e6c11c43e1a0c98c66ef26200783958c9d834d2879535b9/crio/crio-a5e9d1fa2d0dcf9614a75b7b52361228fdc4ba9238cbb50092a1732a269f388f"
	I1002 00:16:21.550594 1520147 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6a4f4e2be76148399e6c11c43e1a0c98c66ef26200783958c9d834d2879535b9/crio/crio-a5e9d1fa2d0dcf9614a75b7b52361228fdc4ba9238cbb50092a1732a269f388f/freezer.state
	I1002 00:16:21.560756 1520147 api_server.go:204] freezer state: "THAWED"
	I1002 00:16:21.560794 1520147 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 00:16:21.568591 1520147 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 00:16:21.568617 1520147 status.go:463] ha-878052-m03 apiserver status = Running (err=<nil>)
	I1002 00:16:21.568626 1520147 status.go:176] ha-878052-m03 status: &{Name:ha-878052-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:16:21.568650 1520147 status.go:174] checking status of ha-878052-m04 ...
	I1002 00:16:21.569008 1520147 cli_runner.go:164] Run: docker container inspect ha-878052-m04 --format={{.State.Status}}
	I1002 00:16:21.587467 1520147 status.go:371] ha-878052-m04 host status = "Running" (err=<nil>)
	I1002 00:16:21.587496 1520147 host.go:66] Checking if "ha-878052-m04" exists ...
	I1002 00:16:21.587807 1520147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-878052-m04
	I1002 00:16:21.605573 1520147 host.go:66] Checking if "ha-878052-m04" exists ...
	I1002 00:16:21.605898 1520147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:16:21.605950 1520147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-878052-m04
	I1002 00:16:21.633173 1520147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/ha-878052-m04/id_rsa Username:docker}
	I1002 00:16:21.728237 1520147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:16:21.739831 1520147 status.go:176] ha-878052-m04 status: &{Name:ha-878052-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.74s)
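
Note the exit status 7 above: with one host stopped, status still prints per-node state on stdout but signals the degradation through its exit code, which is what scripts should key off; a minimal sketch:

	minikube -p ha-878052 node stop m02
	minikube -p ha-878052 status
	# non-zero (7 in this run) whenever at least one host is stopped
	echo $?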

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (49.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 node start m02 -v=7 --alsologtostderr
E1002 00:16:26.911434 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:16:51.063295 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:17:07.872974 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-878052 node start m02 -v=7 --alsologtostderr: (48.730972936s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.76s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.005830545s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-878052 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-878052 -v=7 --alsologtostderr
E1002 00:17:18.776653 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-878052 -v=7 --alsologtostderr: (36.931762333s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-878052 --wait=true -v=7 --alsologtostderr
E1002 00:18:29.794904 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:20:45.927395 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:21:13.636920 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-878052 --wait=true -v=7 --alsologtostderr: (3m24.808792925s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-878052
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.88s)
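
The stop/start cycle is expected to preserve node membership; a minimal sketch of the same sequence:

	minikube node list -p ha-878052
	minikube stop -p ha-878052
	# --wait=true blocks until every node, including the secondary control planes, is healthy again
	minikube start -p ha-878052 --wait=true
	# should match the pre-stop list
	minikube node list -p ha-878052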

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-878052 node delete m03 -v=7 --alsologtostderr: (10.672279532s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.65s)
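
The go-template above prints one Ready condition per node, so a successful delete shows exactly one line fewer; a minimal sketch:

	minikube -p ha-878052 node delete m03
	kubectl get nodes
	# expect a " True" line per remaining node
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"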

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 stop -v=7 --alsologtostderr
E1002 00:21:51.063484 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-878052 stop -v=7 --alsologtostderr: (35.794144502s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-878052 status -v=7 --alsologtostderr: exit status 7 (114.891795ms)

                                                
                                                
-- stdout --
	ha-878052
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-878052-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-878052-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:22:03.377895 1535243 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:22:03.378102 1535243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:22:03.378132 1535243 out.go:358] Setting ErrFile to fd 2...
	I1002 00:22:03.378152 1535243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:22:03.378418 1535243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	I1002 00:22:03.378637 1535243 out.go:352] Setting JSON to false
	I1002 00:22:03.378699 1535243 mustload.go:65] Loading cluster: ha-878052
	I1002 00:22:03.378780 1535243 notify.go:220] Checking for updates...
	I1002 00:22:03.379202 1535243 config.go:182] Loaded profile config "ha-878052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:22:03.379249 1535243 status.go:174] checking status of ha-878052 ...
	I1002 00:22:03.380109 1535243 cli_runner.go:164] Run: docker container inspect ha-878052 --format={{.State.Status}}
	I1002 00:22:03.397865 1535243 status.go:371] ha-878052 host status = "Stopped" (err=<nil>)
	I1002 00:22:03.397887 1535243 status.go:384] host is not running, skipping remaining checks
	I1002 00:22:03.397895 1535243 status.go:176] ha-878052 status: &{Name:ha-878052 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:22:03.397931 1535243 status.go:174] checking status of ha-878052-m02 ...
	I1002 00:22:03.398257 1535243 cli_runner.go:164] Run: docker container inspect ha-878052-m02 --format={{.State.Status}}
	I1002 00:22:03.428701 1535243 status.go:371] ha-878052-m02 host status = "Stopped" (err=<nil>)
	I1002 00:22:03.428721 1535243 status.go:384] host is not running, skipping remaining checks
	I1002 00:22:03.428729 1535243 status.go:176] ha-878052-m02 status: &{Name:ha-878052-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:22:03.428757 1535243 status.go:174] checking status of ha-878052-m04 ...
	I1002 00:22:03.429056 1535243 cli_runner.go:164] Run: docker container inspect ha-878052-m04 --format={{.State.Status}}
	I1002 00:22:03.446007 1535243 status.go:371] ha-878052-m04 host status = "Stopped" (err=<nil>)
	I1002 00:22:03.446029 1535243 status.go:384] host is not running, skipping remaining checks
	I1002 00:22:03.446037 1535243 status.go:176] ha-878052-m04 status: &{Name:ha-878052-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (95.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-878052 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-878052 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m34.244810501s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (95.17s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (70.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-878052 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-878052 --control-plane -v=7 --alsologtostderr: (1m9.06327593s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-878052 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.03s)
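
Growing the control plane back uses the same node add, with --control-plane; a minimal sketch:

	# joins as an additional control-plane member rather than a worker
	minikube node add -p ha-878052 --control-plane
	minikube -p ha-878052 status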

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

                                                
                                    
TestJSONOutput/start/Command (80.31s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-462268 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1002 00:25:45.928572 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-462268 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m20.304040481s)
--- PASS: TestJSONOutput/start/Command (80.31s)
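
With --output=json, every step is emitted as one CloudEvents-style JSON object per line on stdout, which is what the Audit and parallel step-ordering subtests below assert against; a minimal sketch of reading the stream, assuming jq is available:

	# data.currentstep and data.totalsteps are strings, per the event schema seen in this report
	minikube start -p json-output-462268 --output=json --user=testUser | jq -r 'select(.type=="io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + " " + .data.message'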

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-462268 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-462268 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.81s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-462268 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-462268 --output=json --user=testUser: (5.811615748s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-711199 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-711199 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (71.935789ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fe7c4b4b-a917-4fe3-8aae-d6cc8e488a1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-711199] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4cf26cbf-e89c-43e9-bc12-b0cab2baa6ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19740"}}
	{"specversion":"1.0","id":"9ed09c5b-986d-4e34-81a1-a2b9a5ad1a01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"805432e5-bb2c-4227-b76a-bbe3dc3e7aff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig"}}
	{"specversion":"1.0","id":"99f47d94-d37e-4bad-9744-f43c0ec22bf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube"}}
	{"specversion":"1.0","id":"a5368748-c371-41ff-bdf0-546776deb722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bb293a62-ffab-477d-9b3e-400c8c322936","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9038aa2d-f30d-46e4-b366-03218337f801","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-711199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-711199
--- PASS: TestErrorJSONOutput (0.21s)
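
The events above are minikube's --output=json stream: one CloudEvents-style JSON object per line, with step events carrying data.currentstep and data.totalsteps as strings. As a rough illustration (not part of the test suite), a small Go program could consume such a stream and assert the property the DistinctCurrentSteps/IncreasingCurrentSteps subtests above check, namely that currentstep never repeats or decreases:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

// event holds just the fields of a minikube JSON line that we inspect here.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin) // e.g. `minikube start --output=json | thisprogram`
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		step, ok := ev.Data["currentstep"]
		if !ok {
			continue // info/error events carry no step counter
		}
		n, err := strconv.Atoi(step)
		if err != nil || n <= last {
			fmt.Fprintf(os.Stderr, "bad or non-increasing currentstep %q after %d\n", step, last)
			os.Exit(1)
		}
		last = n
	}
}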

TestKicCustomNetwork/create_custom_network (41.09s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-740993 --network=
E1002 00:26:51.062463 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-740993 --network=: (38.985244287s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-740993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-740993
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-740993: (2.080414913s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.09s)

TestKicCustomNetwork/use_default_bridge_network (35.21s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-412299 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-412299 --network=bridge: (33.220225991s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-412299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-412299
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-412299: (1.967877769s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.21s)

TestKicExistingNetwork (31.67s)

=== RUN   TestKicExistingNetwork
I1002 00:27:46.790567 1468453 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 00:27:46.805852 1468453 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 00:27:46.807372 1468453 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 00:27:46.807414 1468453 cli_runner.go:164] Run: docker network inspect existing-network
W1002 00:27:46.821119 1468453 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 00:27:46.821157 1468453 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1002 00:27:46.821172 1468453 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1002 00:27:46.821274 1468453 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 00:27:46.837696 1468453 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f117b98822d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:46:45:0a:e2} reservation:<nil>}
I1002 00:27:46.838773 1468453 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001949310}
I1002 00:27:46.838810 1468453 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1002 00:27:46.838862 1468453 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 00:27:46.903556 1468453 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-580709 --network=existing-network
E1002 00:28:14.138445 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-580709 --network=existing-network: (29.517383425s)
helpers_test.go:175: Cleaning up "existing-network-580709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-580709
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-580709: (2.005106743s)
I1002 00:28:18.441770 1468453 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.67s)
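
The trace above shows the interesting part of this test: the subnet picker inspects the existing networks, sees 192.168.49.0/24 already bound to br-3f117b98822d, and falls through to the next candidate, 192.168.58.0/24, before running docker network create with the minikube labels. A sketch of that probe-and-skip loop, shelling out to the docker CLI rather than using minikube's internal network package (the candidate list and helper name are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subnetTaken reports whether any existing docker network already claims cidr.
func subnetTaken(cidr string) (bool, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return false, err
	}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue // the network may have been removed since `ls`
		}
		if strings.Contains(string(out), cidr) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Same ladder the log walks: 192.168.49.0/24, then .58, .67, ...
	for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
		taken, err := subnetTaken(cidr)
		if err != nil {
			panic(err)
		}
		if !taken {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}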

TestKicCustomSubnet (34.22s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-871097 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-871097 --subnet=192.168.60.0/24: (32.154487808s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-871097 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-871097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-871097
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-871097: (2.039933492s)
--- PASS: TestKicCustomSubnet (34.22s)

TestKicStaticIP (34.95s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-084827 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-084827 --static-ip=192.168.200.200: (32.728755478s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-084827 ip
helpers_test.go:175: Cleaning up "static-ip-084827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-084827
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-084827: (2.082964199s)
--- PASS: TestKicStaticIP (34.95s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (65.53s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-954996 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-954996 --driver=docker  --container-runtime=crio: (27.704100806s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-957473 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-957473 --driver=docker  --container-runtime=crio: (32.531668327s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-954996
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-957473
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-957473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-957473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-957473: (1.98105996s)
helpers_test.go:175: Cleaning up "first-954996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-954996
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-954996: (1.944152596s)
--- PASS: TestMinikubeProfile (65.53s)
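
The test asserts that both profiles appear in `profile list -ojson` after each `profile` switch. A hedged sketch of that verification step, treating the JSON as opaque text so nothing is assumed about its schema beyond the profile names appearing in it (the real test decodes the structure):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// profileListed runs `minikube profile list -o json` and reports whether the
// quoted profile name occurs anywhere in the output.
func profileListed(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), fmt.Sprintf("%q", name)), nil
}

func main() {
	for _, p := range []string{"first-954996", "second-957473"} {
		ok, err := profileListed(p)
		if err != nil || !ok {
			fmt.Fprintf(os.Stderr, "profile %s missing from list (err=%v)\n", p, err)
			os.Exit(1)
		}
		fmt.Println("profile present:", p)
	}
}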

TestMountStart/serial/StartWithMountFirst (9.43s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-284750 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-284750 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.429789807s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.43s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-284750 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (9.2s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-286621 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1002 00:30:45.928263 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-286621 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.196409451s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.20s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-286621 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-284750 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-284750 --alsologtostderr -v=5: (1.62361365s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-286621 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-286621
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-286621: (1.195878667s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.06s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-286621
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-286621: (7.060764248s)
--- PASS: TestMountStart/serial/RestartStopped (8.06s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-286621 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (78.46s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-201529 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1002 00:31:51.062668 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:32:08.998993 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-201529 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.954223785s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.46s)

TestMultiNode/serial/DeployApp2Nodes (7.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-201529 -- rollout status deployment/busybox: (5.154074724s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- exec busybox-7dff88458-b69dj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- exec busybox-7dff88458-qtx9z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- exec busybox-7dff88458-b69dj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- exec busybox-7dff88458-qtx9z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- exec busybox-7dff88458-b69dj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- exec busybox-7dff88458-qtx9z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.03s)
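
The deployment check fans the same three lookups (kubernetes.io, kubernetes.default, and the fully qualified kubernetes.default.svc.cluster.local) across one busybox pod per node, so a DNS failure on either node fails the test. A minimal standalone version of that loop, reusing the pod names from this run (they are per-run ReplicaSet hashes, so treat them as placeholders):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-b69dj", "busybox-7dff88458-qtx9z"} // one per node
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, host := range hosts {
			// nslookup exits non-zero when resolution fails inside the pod.
			cmd := exec.Command("kubectl", "--context", "multinode-201529",
				"exec", pod, "--", "nslookup", host)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "%s: nslookup %s failed: %v\n%s", pod, host, err, out)
				os.Exit(1)
			}
		}
	}
	fmt.Println("DNS resolves from pods on both nodes")
}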

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- exec busybox-7dff88458-b69dj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- exec busybox-7dff88458-b69dj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- exec busybox-7dff88458-qtx9z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-201529 -- exec busybox-7dff88458-qtx9z -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (60.05s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-201529 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-201529 -v 3 --alsologtostderr: (59.41779947s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (60.05s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-201529 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (9.72s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp testdata/cp-test.txt multinode-201529:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp multinode-201529:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1098571913/001/cp-test_multinode-201529.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp multinode-201529:/home/docker/cp-test.txt multinode-201529-m02:/home/docker/cp-test_multinode-201529_multinode-201529-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m02 "sudo cat /home/docker/cp-test_multinode-201529_multinode-201529-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp multinode-201529:/home/docker/cp-test.txt multinode-201529-m03:/home/docker/cp-test_multinode-201529_multinode-201529-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m03 "sudo cat /home/docker/cp-test_multinode-201529_multinode-201529-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp testdata/cp-test.txt multinode-201529-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp multinode-201529-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1098571913/001/cp-test_multinode-201529-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp multinode-201529-m02:/home/docker/cp-test.txt multinode-201529:/home/docker/cp-test_multinode-201529-m02_multinode-201529.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529 "sudo cat /home/docker/cp-test_multinode-201529-m02_multinode-201529.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp multinode-201529-m02:/home/docker/cp-test.txt multinode-201529-m03:/home/docker/cp-test_multinode-201529-m02_multinode-201529-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m03 "sudo cat /home/docker/cp-test_multinode-201529-m02_multinode-201529-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp testdata/cp-test.txt multinode-201529-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp multinode-201529-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1098571913/001/cp-test_multinode-201529-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp multinode-201529-m03:/home/docker/cp-test.txt multinode-201529:/home/docker/cp-test_multinode-201529-m03_multinode-201529.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529 "sudo cat /home/docker/cp-test_multinode-201529-m03_multinode-201529.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 cp multinode-201529-m03:/home/docker/cp-test.txt multinode-201529-m02:/home/docker/cp-test_multinode-201529-m03_multinode-201529-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 ssh -n multinode-201529-m02 "sudo cat /home/docker/cp-test_multinode-201529-m03_multinode-201529-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.72s)

TestMultiNode/serial/StopNode (2.19s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-201529 node stop m03: (1.198568665s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-201529 status: exit status 7 (499.53257ms)

-- stdout --
	multinode-201529
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-201529-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-201529-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-201529 status --alsologtostderr: exit status 7 (494.463898ms)

-- stdout --
	multinode-201529
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-201529-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-201529-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 00:33:44.255688 1588216 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:33:44.256015 1588216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:33:44.256030 1588216 out.go:358] Setting ErrFile to fd 2...
	I1002 00:33:44.256036 1588216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:33:44.256265 1588216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	I1002 00:33:44.256452 1588216 out.go:352] Setting JSON to false
	I1002 00:33:44.256477 1588216 mustload.go:65] Loading cluster: multinode-201529
	I1002 00:33:44.256955 1588216 config.go:182] Loaded profile config "multinode-201529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:33:44.256981 1588216 status.go:174] checking status of multinode-201529 ...
	I1002 00:33:44.257555 1588216 cli_runner.go:164] Run: docker container inspect multinode-201529 --format={{.State.Status}}
	I1002 00:33:44.258088 1588216 notify.go:220] Checking for updates...
	I1002 00:33:44.275732 1588216 status.go:371] multinode-201529 host status = "Running" (err=<nil>)
	I1002 00:33:44.275762 1588216 host.go:66] Checking if "multinode-201529" exists ...
	I1002 00:33:44.276074 1588216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-201529
	I1002 00:33:44.299449 1588216 host.go:66] Checking if "multinode-201529" exists ...
	I1002 00:33:44.299756 1588216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:33:44.299803 1588216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-201529
	I1002 00:33:44.323470 1588216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34429 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/multinode-201529/id_rsa Username:docker}
	I1002 00:33:44.416400 1588216 ssh_runner.go:195] Run: systemctl --version
	I1002 00:33:44.420536 1588216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:33:44.432639 1588216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:33:44.480794 1588216 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-02 00:33:44.470422712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:33:44.481378 1588216 kubeconfig.go:125] found "multinode-201529" server: "https://192.168.67.2:8443"
	I1002 00:33:44.481416 1588216 api_server.go:166] Checking apiserver status ...
	I1002 00:33:44.481461 1588216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:33:44.492715 1588216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1406/cgroup
	I1002 00:33:44.501997 1588216 api_server.go:182] apiserver freezer: "4:freezer:/docker/db80980f321e9b53cb5fcaf0286318a4209bc6f9160560b1b5d45744a374402b/crio/crio-e551c064e32e18c2545a79c20be2957de4beb91261ea0d377f79d829d2163912"
	I1002 00:33:44.502077 1588216 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/db80980f321e9b53cb5fcaf0286318a4209bc6f9160560b1b5d45744a374402b/crio/crio-e551c064e32e18c2545a79c20be2957de4beb91261ea0d377f79d829d2163912/freezer.state
	I1002 00:33:44.510847 1588216 api_server.go:204] freezer state: "THAWED"
	I1002 00:33:44.510873 1588216 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 00:33:44.518832 1588216 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 00:33:44.518862 1588216 status.go:463] multinode-201529 apiserver status = Running (err=<nil>)
	I1002 00:33:44.518872 1588216 status.go:176] multinode-201529 status: &{Name:multinode-201529 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:33:44.518890 1588216 status.go:174] checking status of multinode-201529-m02 ...
	I1002 00:33:44.519345 1588216 cli_runner.go:164] Run: docker container inspect multinode-201529-m02 --format={{.State.Status}}
	I1002 00:33:44.535688 1588216 status.go:371] multinode-201529-m02 host status = "Running" (err=<nil>)
	I1002 00:33:44.535713 1588216 host.go:66] Checking if "multinode-201529-m02" exists ...
	I1002 00:33:44.535998 1588216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-201529-m02
	I1002 00:33:44.552970 1588216 host.go:66] Checking if "multinode-201529-m02" exists ...
	I1002 00:33:44.553285 1588216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:33:44.553337 1588216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-201529-m02
	I1002 00:33:44.573438 1588216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/19740-1463060/.minikube/machines/multinode-201529-m02/id_rsa Username:docker}
	I1002 00:33:44.667941 1588216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:33:44.679245 1588216 status.go:176] multinode-201529-m02 status: &{Name:multinode-201529-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:33:44.679280 1588216 status.go:174] checking status of multinode-201529-m03 ...
	I1002 00:33:44.679599 1588216 cli_runner.go:164] Run: docker container inspect multinode-201529-m03 --format={{.State.Status}}
	I1002 00:33:44.696466 1588216 status.go:371] multinode-201529-m03 host status = "Stopped" (err=<nil>)
	I1002 00:33:44.696501 1588216 status.go:384] host is not running, skipping remaining checks
	I1002 00:33:44.696510 1588216 status.go:176] multinode-201529-m03 status: &{Name:multinode-201529-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
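
The --alsologtostderr trace makes the status pipeline visible: `docker container inspect --format={{.State.Status}}` decides the Host field, an ssh probe of the kubelet service decides Kubelet, and for the control plane an extra hop through the freezer cgroup and /healthz decides APIServer. A stripped-down sketch of just the first hop, with a simplified state mapping (the real code handles more container states):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus asks Docker for the node container's state, as the log does.
func hostStatus(node string) string {
	out, err := exec.Command("docker", "container", "inspect",
		node, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "Nonexistent" // inspect exits non-zero when no such container exists
	}
	switch strings.TrimSpace(string(out)) {
	case "running":
		return "Running"
	case "exited", "created":
		return "Stopped"
	default:
		return "Unknown"
	}
}

func main() {
	for _, node := range []string{"multinode-201529", "multinode-201529-m02", "multinode-201529-m03"} {
		fmt.Printf("%s\thost: %s\n", node, hostStatus(node))
	}
}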

TestMultiNode/serial/StartAfterStop (10.94s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-201529 node start m03 -v=7 --alsologtostderr: (10.195393975s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.94s)

TestMultiNode/serial/RestartKeepsNodes (81.43s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-201529
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-201529
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-201529: (24.820081819s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-201529 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-201529 --wait=true -v=8 --alsologtostderr: (56.496003022s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-201529
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.43s)

TestMultiNode/serial/DeleteNode (5.19s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-201529 node delete m03: (4.523598355s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)

TestMultiNode/serial/StopMultiNode (23.87s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 stop
E1002 00:35:45.928403 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-201529 stop: (23.679548427s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-201529 status: exit status 7 (104.614765ms)

-- stdout --
	multinode-201529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-201529-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-201529 status --alsologtostderr: exit status 7 (86.037557ms)

-- stdout --
	multinode-201529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-201529-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 00:35:46.101799 1595598 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:35:46.102044 1595598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:35:46.102072 1595598 out.go:358] Setting ErrFile to fd 2...
	I1002 00:35:46.102092 1595598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:35:46.102386 1595598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	I1002 00:35:46.102621 1595598 out.go:352] Setting JSON to false
	I1002 00:35:46.102699 1595598 mustload.go:65] Loading cluster: multinode-201529
	I1002 00:35:46.102803 1595598 notify.go:220] Checking for updates...
	I1002 00:35:46.103322 1595598 config.go:182] Loaded profile config "multinode-201529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:35:46.103382 1595598 status.go:174] checking status of multinode-201529 ...
	I1002 00:35:46.103962 1595598 cli_runner.go:164] Run: docker container inspect multinode-201529 --format={{.State.Status}}
	I1002 00:35:46.121211 1595598 status.go:371] multinode-201529 host status = "Stopped" (err=<nil>)
	I1002 00:35:46.121236 1595598 status.go:384] host is not running, skipping remaining checks
	I1002 00:35:46.121244 1595598 status.go:176] multinode-201529 status: &{Name:multinode-201529 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:35:46.121281 1595598 status.go:174] checking status of multinode-201529-m02 ...
	I1002 00:35:46.121629 1595598 cli_runner.go:164] Run: docker container inspect multinode-201529-m02 --format={{.State.Status}}
	I1002 00:35:46.137895 1595598 status.go:371] multinode-201529-m02 host status = "Stopped" (err=<nil>)
	I1002 00:35:46.137916 1595598 status.go:384] host is not running, skipping remaining checks
	I1002 00:35:46.137923 1595598 status.go:176] multinode-201529-m02 status: &{Name:multinode-201529-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.87s)

TestMultiNode/serial/RestartMultiNode (53.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-201529 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-201529 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (52.59885294s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-201529 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.27s)

TestMultiNode/serial/ValidateNameConflict (34.09s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-201529
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-201529-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-201529-m02 --driver=docker  --container-runtime=crio: exit status 14 (88.931934ms)

-- stdout --
	* [multinode-201529-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-201529-m02' is duplicated with machine name 'multinode-201529-m02' in profile 'multinode-201529'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-201529-m03 --driver=docker  --container-runtime=crio
E1002 00:36:51.063194 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-201529-m03 --driver=docker  --container-runtime=crio: (31.660221254s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-201529
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-201529: exit status 80 (324.009234ms)

-- stdout --
	* Adding node m03 to cluster multinode-201529 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-201529-m03 already exists in multinode-201529-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-201529-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-201529-m03: (1.965796156s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.09s)
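
Both rejections above come from the same uniqueness rule: a new profile name may collide neither with an existing profile nor with a machine name inside one (multinode-201529-m02 is node m02 of profile multinode-201529). A toy version of that guard, with the function and data shapes as illustrations rather than minikube's own:

package main

import "fmt"

// validateProfileName rejects a candidate that matches an existing profile
// or any machine name belonging to one, mirroring the MK_USAGE error above.
func validateProfileName(candidate string, profiles map[string][]string) error {
	for profile, machines := range profiles {
		if candidate == profile {
			return fmt.Errorf("profile name %q duplicates profile %q", candidate, profile)
		}
		for _, m := range machines {
			if candidate == m {
				return fmt.Errorf("profile name %q duplicates machine %q in profile %q",
					candidate, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-201529": {"multinode-201529", "multinode-201529-m02", "multinode-201529-m03"},
	}
	fmt.Println(validateProfileName("multinode-201529-m02", existing)) // rejected
	fmt.Println(validateProfileName("multinode-201529-m04", existing)) // <nil>
}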

TestPreload (128.02s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-123210 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-123210 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.468179284s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-123210 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-123210 image pull gcr.io/k8s-minikube/busybox: (3.139831851s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-123210
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-123210: (5.74475209s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-123210 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-123210 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.017347659s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-123210 image list
helpers_test.go:175: Cleaning up "test-preload-123210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-123210
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-123210: (2.324376406s)
--- PASS: TestPreload (128.02s)
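
The sequence matters here: start with --preload=false on v1.24.4, pull busybox, stop, then restart on the default Kubernetes version; the final `image list` proves the manually pulled image survived the preload-backed restart. A one-off recheck of that last assertion could look like the following sketch (profile name taken from the log above, and only valid while the profile still exists):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "test-preload-123210", "image", "list").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "image list failed:", err)
		os.Exit(1)
	}
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Fprintln(os.Stderr, "busybox image missing after restart")
		os.Exit(1)
	}
	fmt.Println("pulled image survived the preload restart")
}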

TestScheduledStopUnix (108.37s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-917031 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-917031 --memory=2048 --driver=docker  --container-runtime=crio: (32.030124617s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-917031 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-917031 -n scheduled-stop-917031
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-917031 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 00:39:58.123617 1468453 retry.go:31] will retry after 133.887µs: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.124777 1468453 retry.go:31] will retry after 82.598µs: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.125909 1468453 retry.go:31] will retry after 191.028µs: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.126992 1468453 retry.go:31] will retry after 244.644µs: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.128087 1468453 retry.go:31] will retry after 662.747µs: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.129192 1468453 retry.go:31] will retry after 906.359µs: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.130293 1468453 retry.go:31] will retry after 1.501245ms: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.132468 1468453 retry.go:31] will retry after 1.470965ms: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.134653 1468453 retry.go:31] will retry after 3.534444ms: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.138857 1468453 retry.go:31] will retry after 4.969802ms: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.151416 1468453 retry.go:31] will retry after 7.503327ms: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.159082 1468453 retry.go:31] will retry after 6.676036ms: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.166307 1468453 retry.go:31] will retry after 8.623785ms: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.175541 1468453 retry.go:31] will retry after 11.829445ms: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.187713 1468453 retry.go:31] will retry after 26.263784ms: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
I1002 00:39:58.214221 1468453 retry.go:31] will retry after 60.489012ms: open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/scheduled-stop-917031/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-917031 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-917031 -n scheduled-stop-917031
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-917031
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-917031 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1002 00:40:45.929803 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-917031
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-917031: exit status 7 (69.070238ms)

                                                
                                                
-- stdout --
	scheduled-stop-917031
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-917031 -n scheduled-stop-917031
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-917031 -n scheduled-stop-917031: exit status 7 (66.297908ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-917031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-917031
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-917031: (4.832338444s)
--- PASS: TestScheduledStopUnix (108.37s)
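
The retry.go lines above show minikube polling for the scheduled-stop pid file, roughly doubling a microsecond-scale delay between attempts. A minimal Go sketch of that poll-with-backoff pattern, assuming a hypothetical pid-file path and a simple doubling policy (minikube's actual retry helper adds jitter and caps the wait):

package main

import (
	"fmt"
	"os"
	"time"
)

// retryWithBackoff polls fn until it succeeds or attempts run out,
// roughly doubling the wait between tries, as in the retry.go log
// lines above. The doubling-without-jitter policy is an assumption.
func retryWithBackoff(fn func() error, attempts int) error {
	delay := 100 * time.Microsecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	pidPath := "/tmp/scheduled-stop-917031/pid" // hypothetical path for illustration
	err := retryWithBackoff(func() error {
		_, statErr := os.Stat(pidPath) // succeeds once the scheduled-stop process writes its pid
		return statErr
	}, 16)
	fmt.Println("result:", err)
}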

                                                
                                    
TestInsufficientStorage (10.32s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-584738 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-584738 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.888551424s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c331d0fc-bd33-4cfc-83c4-51eea8b46608","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-584738] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"67d8e1cb-7fb0-4965-8af3-45d21d77791c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19740"}}
	{"specversion":"1.0","id":"c542bbcb-4f68-4568-9324-7b3676776e9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c14e3c3e-b2dd-40ae-a67c-010c9604d187","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig"}}
	{"specversion":"1.0","id":"dddaebed-36ef-4952-964b-62e83fd6e9e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube"}}
	{"specversion":"1.0","id":"2e82896c-a4a0-4e9d-85df-4ce79915575d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"57a57c13-b0af-45cb-883e-bd1fa280b061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c10c251c-b62f-472c-b8ee-6ec61e2d3b2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1c62b24f-13cd-45b7-a1d2-ea48edf42b0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e1c6a060-5d6c-4fa9-b139-b0ab591b998d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4ea309c-812e-4a9f-8feb-77f20df9b2a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ff56cc01-8c0f-4466-bf3d-25ff60ee3d72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-584738\" primary control-plane node in \"insufficient-storage-584738\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"25814376-369d-4d69-a60e-f90576cb8c83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"871e8d00-27d4-41a4-868f-aa4a58c24d46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"83212e61-e994-4e91-901e-e708471515e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-584738 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-584738 --output=json --layout=cluster: exit status 7 (285.54103ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-584738","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-584738","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:41:22.157237 1613270 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-584738" does not appear in /home/jenkins/minikube-integration/19740-1463060/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-584738 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-584738 --output=json --layout=cluster: exit status 7 (283.590883ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-584738","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-584738","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:41:22.442954 1613333 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-584738" does not appear in /home/jenkins/minikube-integration/19740-1463060/kubeconfig
	E1002 00:41:22.453153 1613333 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/insufficient-storage-584738/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-584738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-584738
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-584738: (1.861599109s)
--- PASS: TestInsufficientStorage (10.32s)
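
With --output=json, minikube streams CloudEvents-style JSON lines like the ones above; the failed start ends with an io.k8s.sigs.minikube.error event carrying exitcode "26" (RSRC_DOCKER_STORAGE). A minimal sketch of consuming that stream and surfacing the error event, with the struct trimmed to the fields visible in this output:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors just the fields visible in the JSON lines above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from: minikube start --output=json
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON noise
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}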

                                                
                                    
TestRunningBinaryUpgrade (69.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2567755421 start -p running-upgrade-210897 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1002 00:45:45.927391 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2567755421 start -p running-upgrade-210897 --memory=2200 --vm-driver=docker  --container-runtime=crio: (31.387522696s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-210897 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-210897 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.800605676s)
helpers_test.go:175: Cleaning up "running-upgrade-210897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-210897
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-210897: (3.024440989s)
--- PASS: TestRunningBinaryUpgrade (69.84s)

                                                
                                    
TestKubernetesUpgrade (393.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-973358 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-973358 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m15.123919977s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-973358
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-973358: (2.043047189s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-973358 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-973358 status --format={{.Host}}: exit status 7 (159.887904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-973358 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-973358 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.434542753s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-973358 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-973358 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-973358 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (104.115859ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-973358] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-973358
	    minikube start -p kubernetes-upgrade-973358 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9733582 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-973358 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-973358 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-973358 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.625317149s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-973358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-973358
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-973358: (2.615368424s)
--- PASS: TestKubernetesUpgrade (393.26s)
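
The exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) above comes from a guard that compares the requested Kubernetes version against the one the existing cluster already runs. A sketch of that comparison using golang.org/x/mod/semver; the guard shape here is an assumption, not minikube's actual validation code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

func main() {
	existing, requested := "v1.31.1", "v1.20.0"
	// semver.Compare returns -1 when requested < existing, i.e. a downgrade.
	if semver.Compare(requested, existing) < 0 {
		fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n", existing, requested)
		os.Exit(106) // matches the K8S_DOWNGRADE_UNSUPPORTED exit code above
	}
	fmt.Println("upgrade or same version: proceeding")
}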

                                                
                                    
TestMissingContainerUpgrade (160s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2275039998 start -p missing-upgrade-795039 --memory=2200 --driver=docker  --container-runtime=crio
E1002 00:41:51.063866 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2275039998 start -p missing-upgrade-795039 --memory=2200 --driver=docker  --container-runtime=crio: (1m27.845097251s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-795039
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-795039: (10.460810285s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-795039
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-795039 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-795039 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.749575398s)
helpers_test.go:175: Cleaning up "missing-upgrade-795039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-795039
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-795039: (2.661414146s)
--- PASS: TestMissingContainerUpgrade (160.00s)
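
TestMissingContainerUpgrade removes the node container out from under a profile (docker stop, docker rm) and then checks that a plain minikube start recreates it. A sketch of driving that same sequence with os/exec, using a hypothetical profile name:

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and aborts on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	profile := "missing-upgrade-demo" // hypothetical profile name
	// Simulate the "missing container" scenario: delete the node container
	// behind minikube's back, then ask the new binary to start the profile.
	run("docker", "stop", profile)
	run("docker", "rm", profile)
	run("./out/minikube-linux-arm64", "start", "-p", profile, "--driver=docker", "--container-runtime=crio")
}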

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-848035 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-848035 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (82.936568ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-848035] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
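
Exit status 14 (MK_USAGE) is a flag-validation failure: --no-kubernetes and --kubernetes-version contradict each other. A sketch of that kind of mutual-exclusion guard with the standard flag package; only the flag names are taken from the run above, the rest is illustrative:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Mirrors the MK_USAGE guard seen above: the two flags are mutually exclusive.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}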

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-848035 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-848035 --driver=docker  --container-runtime=crio: (37.320555243s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-848035 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-848035 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-848035 --no-kubernetes --driver=docker  --container-runtime=crio: (7.27283357s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-848035 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-848035 status -o json: exit status 2 (287.424916ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-848035","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-848035
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-848035: (1.901978124s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.46s)
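
Note that status -o json exits 2 here because the kubelet and apiserver are stopped even though the host container is running. A sketch that decodes the JSON document shown above; the struct fields are read off that output, and the exit-code mapping is an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// profileStatus mirrors the JSON printed by `minikube status -o json` above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := []byte(`{"Name":"NoKubernetes-848035","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
	var st profileStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	if st.Kubelet != "Running" || st.APIServer != "Running" {
		os.Exit(2) // matches the non-zero exit observed above when components are stopped
	}
}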

                                                
                                    
TestNoKubernetes/serial/Start (9.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-848035 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-848035 --no-kubernetes --driver=docker  --container-runtime=crio: (9.52507116s)
--- PASS: TestNoKubernetes/serial/Start (9.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-848035 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-848035 "sudo systemctl is-active --quiet service kubelet": exit status 1 (395.647147ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)
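
The check shells into the node and runs systemctl is-active --quiet service kubelet; systemctl signals a non-active unit through a non-zero exit code (3 in the run above), which the test takes as proof that no kubelet is running. A sketch of reading that exit code with os/exec, run locally here rather than over minikube ssh:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the exit code alone carries the answer.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		// 3 is what the test run above observed for a non-active unit.
		fmt.Printf("kubelet not active (exit %d)\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}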

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (1.029493306s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.60s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-848035
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-848035: (1.272800446s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-848035 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-848035 --driver=docker  --container-runtime=crio: (7.486726626s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.49s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-848035 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-848035 "sudo systemctl is-active --quiet service kubelet": exit status 1 (344.132822ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (73.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2695001941 start -p stopped-upgrade-378223 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2695001941 start -p stopped-upgrade-378223 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.202862358s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2695001941 -p stopped-upgrade-378223 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2695001941 -p stopped-upgrade-378223 stop: (2.705105271s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-378223 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1002 00:44:54.139828 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-378223 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.988376103s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.90s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-378223
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestPause/serial/Start (76.7s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-162921 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1002 00:46:51.062674 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-162921 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m16.696018586s)
--- PASS: TestPause/serial/Start (76.70s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (39.87s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-162921 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-162921 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.853665186s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.87s)

                                                
                                    
TestPause/serial/Pause (1.04s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-162921 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-162921 --alsologtostderr -v=5: (1.035935836s)
--- PASS: TestPause/serial/Pause (1.04s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-162921 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-162921 --output=json --layout=cluster: exit status 2 (417.030694ms)

                                                
                                                
-- stdout --
	{"Name":"pause-162921","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-162921","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
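
The --layout=cluster output encodes component state as HTTP-flavoured status codes: 200 OK, 405 Stopped, 418 Paused, and, earlier in this report, 500 Error and 507 InsufficientStorage. A small lookup sketch, with the table read off those outputs:

package main

import "fmt"

// statusName maps the HTTP-flavoured codes seen in this report's
// `status --layout=cluster` output to their names.
var statusName = map[int]string{
	200: "OK",
	405: "Stopped",
	418: "Paused",
	500: "Error",
	507: "InsufficientStorage",
}

func main() {
	for _, code := range []int{200, 405, 418, 507} {
		fmt.Printf("%d => %s\n", code, statusName[code])
	}
}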

                                                
                                    
TestPause/serial/Unpause (0.99s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-162921 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.99s)

                                                
                                    
TestPause/serial/PauseAgain (1.1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-162921 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-162921 --alsologtostderr -v=5: (1.09702637s)
--- PASS: TestPause/serial/PauseAgain (1.10s)

                                                
                                    
TestPause/serial/DeletePaused (3.12s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-162921 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-162921 --alsologtostderr -v=5: (3.12446391s)
--- PASS: TestPause/serial/DeletePaused (3.12s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.6s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-162921
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-162921: exit status 1 (20.703567ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-162921: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.60s)
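
Deletion is verified negatively: docker volume inspect on the removed volume prints [], writes "no such volume" to stderr, and exits 1, which the test reads as confirmation that cleanup worked. A sketch of that check, assuming the docker CLI is on PATH:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// volumeGone reports true when `docker volume inspect` fails with a
// "no such volume" error, i.e. cleanup really removed the volume.
func volumeGone(name string) bool {
	var stderr bytes.Buffer
	cmd := exec.Command("docker", "volume", "inspect", name)
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return strings.Contains(stderr.String(), "no such volume")
	}
	return false
}

func main() {
	fmt.Println("volume gone:", volumeGone("pause-162921"))
}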

                                                
                                    
TestNetworkPlugins/group/false (4.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-028074 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-028074 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (269.631762ms)

                                                
                                                
-- stdout --
	* [false-028074] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:49:12.001422 1653656 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:49:12.001684 1653656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:49:12.001710 1653656 out.go:358] Setting ErrFile to fd 2...
	I1002 00:49:12.001716 1653656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:49:12.002269 1653656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1463060/.minikube/bin
	I1002 00:49:12.002909 1653656 out.go:352] Setting JSON to false
	I1002 00:49:12.004343 1653656 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23492,"bootTime":1727806660,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 00:49:12.004435 1653656 start.go:139] virtualization:  
	I1002 00:49:12.009143 1653656 out.go:177] * [false-028074] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1002 00:49:12.011531 1653656 notify.go:220] Checking for updates...
	I1002 00:49:12.012149 1653656 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:49:12.014829 1653656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:49:12.017115 1653656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1463060/kubeconfig
	I1002 00:49:12.019382 1653656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1463060/.minikube
	I1002 00:49:12.021683 1653656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 00:49:12.024093 1653656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:49:12.027839 1653656 config.go:182] Loaded profile config "force-systemd-flag-341519": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:49:12.028019 1653656 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:49:12.078591 1653656 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1002 00:49:12.078720 1653656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:49:12.143864 1653656 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:41 SystemTime:2024-10-02 00:49:12.133269263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:49:12.143971 1653656 docker.go:318] overlay module found
	I1002 00:49:12.147243 1653656 out.go:177] * Using the docker driver based on user configuration
	I1002 00:49:12.149685 1653656 start.go:297] selected driver: docker
	I1002 00:49:12.149708 1653656 start.go:901] validating driver "docker" against <nil>
	I1002 00:49:12.149722 1653656 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:49:12.153408 1653656 out.go:201] 
	W1002 00:49:12.159301 1653656 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 00:49:12.162053 1653656 out.go:201] 

                                                
                                                
** /stderr **
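
The exit status 14 (MK_USAGE) here is the runtime/CNI validation: --cni=false is rejected because the crio container runtime needs a CNI plugin. A sketch of a guard with that shape; the message matches the log above, but the function itself is illustrative rather than minikube's actual code:

package main

import (
	"fmt"
	"os"
)

// validateCNI mirrors the MK_USAGE guard above: with the crio runtime,
// disabling CNI is not allowed.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime == "crio" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}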
net_test.go:88: 
----------------------- debugLogs start: false-028074 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-028074

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-028074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-028074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-028074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-028074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-028074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-028074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-028074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-028074" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-028074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-028074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-028074" does not exist

                                                
>>> host: kubelet daemon status:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: kubelet daemon config:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> k8s: kubelet logs:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-028074

>>> host: docker daemon status:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: docker daemon config:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: /etc/docker/daemon.json:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: docker system info:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: cri-docker daemon status:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: cri-docker daemon config:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: cri-dockerd version:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: containerd daemon status:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: containerd daemon config:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: /etc/containerd/config.toml:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: containerd config dump:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: crio daemon status:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: crio daemon config:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: /etc/crio:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

>>> host: crio config:
* Profile "false-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028074"

----------------------- debugLogs end: false-028074 [took: 4.471688817s] --------------------------------
helpers_test.go:175: Cleaning up "false-028074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-028074
--- PASS: TestNetworkPlugins/group/false (4.94s)
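Every probe in the debugLogs dump above fails the same way because the "false-028074" profile was never started. As a minimal shell sketch of reproducing one probe by hand: the profile name and both minikube commands come straight from the log, while the driver and runtime flags are assumptions copied from the other invocations in this report.

# Confirm the profile is unknown; a never-started profile is absent here.
minikube profile list
# Only if you actually want the probes to return data, create the profile
# (driver/runtime flags assumed, mirroring the rest of this run).
minikube start -p false-028074 --driver=docker --container-runtime=crio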

TestStartStop/group/old-k8s-version/serial/FirstStart (191.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-633357 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1002 00:50:45.927394 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:51:51.062853 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-633357 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m11.008953495s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (191.01s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-657875 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-657875 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m21.800043119s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.80s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-633357 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [87ffd143-77c4-4b22-9149-14216294c754] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [87ffd143-77c4-4b22-9149-14216294c754] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.005233718s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-633357 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.67s)
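The DeployApp step above records only the path testdata/busybox.yaml, not its contents. As a rough stand-in, the sketch below creates a pod that would satisfy the same wait condition; the pod name, the integration-test=busybox label, the default namespace, and the busybox image are taken from elsewhere in this report, while everything else (notably the sleep command) is an assumption.

kubectl --context old-k8s-version-633357 create -f - <<'EOF'
# Hypothetical equivalent of testdata/busybox.yaml; see the note above for what is assumed.
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF

Once the pod reports Running, the same kubectl exec busybox -- /bin/sh -c "ulimit -n" check from the log can be run against it.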

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-633357 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-633357 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.192967957s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-633357 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.36s)
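The enable step above exercises minikube's per-addon image and registry overrides: MetricsServer is pointed at the echoserver image on a deliberately unreachable registry, and the follow-up describe of deploy/metrics-server is what lets the test inspect the override in the rendered Deployment. Reproduced as plain commands, with every flag taken verbatim from the log (minikube here stands in for the out/minikube-linux-arm64 binary the harness runs):

minikube addons enable metrics-server -p old-k8s-version-633357 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain
kubectl --context old-k8s-version-633357 describe deploy/metrics-server -n kube-system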

TestStartStop/group/old-k8s-version/serial/Stop (12.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-633357 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-633357 --alsologtostderr -v=3: (12.255246667s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-633357 -n old-k8s-version-633357
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-633357 -n old-k8s-version-633357: exit status 7 (99.37396ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-633357 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
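The status check above shows why the harness tolerates a non-zero exit: minikube status renders a single field through a Go template and signals overall state through its exit code, so a cleanly stopped cluster prints "Stopped" with exit status 7, which the test explicitly treats as "may be ok". A minimal sketch, with the binary name shortened to minikube:

minikube status --format={{.Host}} -p old-k8s-version-633357 -n old-k8s-version-633357
echo $?   # 7 while the cluster is stopped, matching the log above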

TestStartStop/group/old-k8s-version/serial/SecondStart (143.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-633357 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-633357 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m23.160690767s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-633357 -n old-k8s-version-633357
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (143.51s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-657875 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d7b54c04-56a0-492f-98f8-b2326a124923] Pending
helpers_test.go:344: "busybox" [d7b54c04-56a0-492f-98f8-b2326a124923] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d7b54c04-56a0-492f-98f8-b2326a124923] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 13.004752386s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-657875 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-657875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-657875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126073092s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-657875 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-657875 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-657875 --alsologtostderr -v=3: (11.951994993s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-657875 -n default-k8s-diff-port-657875
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-657875 -n default-k8s-diff-port-657875: exit status 7 (69.466592ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-657875 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-657875 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1002 00:55:45.927036 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-657875 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m26.802568646s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-657875 -n default-k8s-diff-port-657875
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.18s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dnl2h" [4f285821-d605-4adb-bf5b-94439e7602d8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003695537s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dnl2h" [4f285821-d605-4adb-bf5b-94439e7602d8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003785198s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-633357 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-633357 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
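The image check above works by dumping every image in the node's runtime as JSON and flagging anything outside minikube's expected set. To eyeball the same list by hand, something like the following works; the jq filter, and the assumption that minikube's JSON output carries image names in a repoTags field, are mine, added only to flatten the output:

# Flatten the JSON image list to one tag per line (jq filter is an assumption).
minikube -p old-k8s-version-633357 image list --format=json | jq -r '.[].repoTags[]'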

TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-633357 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-633357 -n old-k8s-version-633357
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-633357 -n old-k8s-version-633357: exit status 2 (332.206384ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-633357 -n old-k8s-version-633357
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-633357 -n old-k8s-version-633357: exit status 2 (317.507003ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-633357 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-633357 -n old-k8s-version-633357
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-633357 -n old-k8s-version-633357
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)
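The Pause sequence above is a full round trip: pause the profile, confirm the API server reports Paused while the kubelet reports Stopped (each via exit status 2, which the test again tolerates), then unpause and re-run both status probes. Reproduced as plain commands (binary shortened to minikube; everything else is verbatim from the log):

minikube pause -p old-k8s-version-633357 --alsologtostderr -v=1
minikube status --format={{.APIServer}} -p old-k8s-version-633357 -n old-k8s-version-633357   # Paused, exit 2
minikube status --format={{.Kubelet}} -p old-k8s-version-633357 -n old-k8s-version-633357     # Stopped, exit 2
minikube unpause -p old-k8s-version-633357 --alsologtostderr -v=1
minikube status --format={{.APIServer}} -p old-k8s-version-633357 -n old-k8s-version-633357   # exits 0 once resumed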

TestStartStop/group/embed-certs/serial/FirstStart (77.45s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-824269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1002 00:56:51.063034 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-824269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m17.451699222s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.45s)

TestStartStop/group/embed-certs/serial/DeployApp (11.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-824269 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [903cb258-9d1a-492b-b168-11a02963a74b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [903cb258-9d1a-492b-b168-11a02963a74b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003480011s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-824269 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-824269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-824269 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/embed-certs/serial/Stop (12.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-824269 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-824269 --alsologtostderr -v=3: (12.050466473s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-824269 -n embed-certs-824269
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-824269 -n embed-certs-824269: exit status 7 (82.604183ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-824269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (288.39s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-824269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1002 00:58:41.086279 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:41.092567 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:41.103858 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:41.125168 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:41.166480 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:41.247833 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:41.409265 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:41.731209 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:42.372963 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:43.654905 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:46.216511 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:58:51.338279 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:59:01.580351 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:59:22.061773 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-824269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m48.041801014s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-824269 -n embed-certs-824269
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (288.39s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dx6rw" [74a13476-36c2-4384-9cf7-510af2e892f8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004091675s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dx6rw" [74a13476-36c2-4384-9cf7-510af2e892f8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004669396s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-657875 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-657875 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-657875 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-657875 -n default-k8s-diff-port-657875
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-657875 -n default-k8s-diff-port-657875: exit status 2 (327.614288ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-657875 -n default-k8s-diff-port-657875
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-657875 -n default-k8s-diff-port-657875: exit status 2 (303.135867ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-657875 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-657875 -n default-k8s-diff-port-657875
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-657875 -n default-k8s-diff-port-657875
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

TestStartStop/group/no-preload/serial/FirstStart (62.27s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-425160 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1002 01:00:45.927268 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-425160 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m2.265062155s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.27s)

TestStartStop/group/no-preload/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-425160 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e31e88de-1043-4b1d-b73e-4afbaaccfbe8] Pending
helpers_test.go:344: "busybox" [e31e88de-1043-4b1d-b73e-4afbaaccfbe8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e31e88de-1043-4b1d-b73e-4afbaaccfbe8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004546716s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-425160 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-425160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-425160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017637253s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-425160 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/Stop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-425160 --alsologtostderr -v=3
E1002 01:01:24.945770 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-425160 --alsologtostderr -v=3: (12.030817509s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-425160 -n no-preload-425160
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-425160 -n no-preload-425160: exit status 7 (77.418716ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-425160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (281.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-425160 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1002 01:01:34.142949 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:01:51.062471 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-425160 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m40.930786194s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-425160 -n no-preload-425160
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (281.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2lksk" [3ce88768-63fc-4adf-920c-b88832f47306] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003506681s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2lksk" [3ce88768-63fc-4adf-920c-b88832f47306] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005182313s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-824269 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-824269 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-824269 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-824269 -n embed-certs-824269
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-824269 -n embed-certs-824269: exit status 2 (321.771537ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-824269 -n embed-certs-824269
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-824269 -n embed-certs-824269: exit status 2 (313.73113ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-824269 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-824269 -n embed-certs-824269
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-824269 -n embed-certs-824269
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

TestStartStop/group/newest-cni/serial/FirstStart (35.17s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-424918 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1002 01:03:41.086075 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:08.788041 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-424918 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (35.165954902s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.17s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-424918 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-424918 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037978824s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-424918 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-424918 --alsologtostderr -v=3: (1.292291923s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-424918 -n newest-cni-424918
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-424918 -n newest-cni-424918: exit status 7 (65.628755ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-424918 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (14.96s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-424918 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-424918 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (14.623363126s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-424918 -n newest-cni-424918
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.96s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-424918 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-424918 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-424918 -n newest-cni-424918
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-424918 -n newest-cni-424918: exit status 2 (321.216941ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-424918 -n newest-cni-424918
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-424918 -n newest-cni-424918: exit status 2 (337.841774ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-424918 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-424918 -n newest-cni-424918
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-424918 -n newest-cni-424918
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.07s)
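While paused, status --format={{.APIServer}} prints Paused and status --format={{.Kubelet}} prints Stopped, each with exit status 2; after unpause both checks return to exit 0, which is why the last two Run lines above show no error. Roughly, for a placeholder profile:

  out/minikube-linux-arm64 pause -p demo
  out/minikube-linux-arm64 status --format={{.APIServer}} -p demo   # "Paused", exit 2
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p demo     # "Stopped", exit 2
  out/minikube-linux-arm64 unpause -p demo
  out/minikube-linux-arm64 status --format={{.APIServer}} -p demo   # "Running", exit 0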

                                                
                                    
TestNetworkPlugins/group/calico/Start (57.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1002 01:04:51.910614 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:51.917175 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:51.928521 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:51.949882 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:51.991307 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:52.072798 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:52.234260 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:52.555699 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:53.196977 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:54.478784 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:04:57.041065 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:05:02.162522 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:05:12.404693 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:05:29.005297 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (57.482876702s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.48s)
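The cert_rotation errors interleaved above do not affect the result: they appear to come from client-go's certificate-rotation watcher inside the long-running test process, which still references client certificates for profiles (default-k8s-diff-port-657875, functional-744852) that earlier tests already deleted. The referenced file is simply gone, as a direct check on the runner would show:

  ls /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt
  # ls: cannot access ...: No such file or directory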

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4vqrj" [52c80f55-c3b0-4e9c-9d56-af6e38959e6d] Running
E1002 01:05:32.886990 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004890607s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
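The readiness poll performed by the harness here can be approximated directly with kubectl, using the label selector and namespace from the log (the timeout below is illustrative):

  kubectl --context calico-028074 -n kube-system wait pod \
    --selector=k8s-app=calico-node --for=condition=Ready --timeout=10m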

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-028074 "pgrep -a kubelet"
I1002 01:05:37.940831 1468453 config.go:182] Loaded profile config "calico-028074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-028074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6z9dr" [0c27547e-a03c-49a4-8f0b-fba3dfba1ac7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6z9dr" [0c27547e-a03c-49a4-8f0b-fba3dfba1ac7] Running
E1002 01:05:45.927380 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003703119s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.30s)
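Each NetCatPod test force-replaces the same netcat Deployment from testdata and waits for its pod to become Ready; the Pending/Running transitions above are the harness observing that rollout. By hand, the equivalent is:

  kubectl --context calico-028074 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context calico-028074 rollout status deployment/netcat --timeout=15m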

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-028074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)
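The DNS check resolves the kubernetes.default Service from inside the netcat pod, exercising both the CNI data path and cluster DNS. Successful output looks roughly like the following; the server address shown is the conventional kube-dns ClusterIP for the default service CIDR and may differ:

  kubectl --context calico-028074 exec deployment/netcat -- nslookup kubernetes.default
  # Server:   10.96.0.10
  # Name:     kubernetes.default.svc.cluster.local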

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)
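Localhost and HairPin differ only in the dial target: the former connects to 127.0.0.1 inside the pod, while the latter connects from the pod back to its own Service name (netcat), which succeeds only when hairpin NAT is working. The nc flags are -z (probe the port without sending data), -w 5 (5-second timeout), and -i 5 (interval); exit status 0 means the port was reachable:

  kubectl --context calico-028074 exec deployment/netcat -- \
    /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo "hairpin OK"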

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jt6gk" [27dacfc7-cf57-4691-883c-3d06c4102466] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003510802s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (53.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1002 01:06:13.849296 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (53.101979292s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jt6gk" [27dacfc7-cf57-4691-883c-3d06c4102466] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004797847s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-425160 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-425160 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-425160 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-425160 --alsologtostderr -v=1: (1.183578923s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-425160 -n no-preload-425160
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-425160 -n no-preload-425160: exit status 2 (489.80398ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-425160 -n no-preload-425160
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-425160 -n no-preload-425160: exit status 2 (423.388799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-425160 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-425160 --alsologtostderr -v=1: (1.008670995s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-425160 -n no-preload-425160
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-425160 -n no-preload-425160
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (60.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1002 01:06:51.062409 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/addons-902832/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.703635758s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.70s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-028074 "pgrep -a kubelet"
I1002 01:07:05.723588 1468453 config.go:182] Loaded profile config "auto-028074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-028074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h77jm" [45ba5cd2-5f3b-4ec8-8e41-3a310667be9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h77jm" [45ba5cd2-5f3b-4ec8-8e41-3a310667be9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003946332s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-028074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-028074 "pgrep -a kubelet"
I1002 01:07:31.104129 1468453 config.go:182] Loaded profile config "custom-flannel-028074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-028074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s5jd2" [8665b207-aa0c-4c7a-8d12-0b02c13ac190] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 01:07:35.771084 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-s5jd2" [8665b207-aa0c-4c7a-8d12-0b02c13ac190] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004119228s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (88.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m28.117530973s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-028074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (46.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1002 01:08:41.086223 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/old-k8s-version-633357/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (46.473230154s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.47s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pdx5s" [d2bde834-672f-466c-81b1-154d2b0476c6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004711292s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-028074 "pgrep -a kubelet"
I1002 01:09:00.611104 1468453 config.go:182] Loaded profile config "flannel-028074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-028074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xvx9n" [56c23075-abec-4388-8886-e9653167b814] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xvx9n" [56c23075-abec-4388-8886-e9653167b814] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004136953s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5b9mv" [d9ca59c8-9425-4027-a46f-6b8df0edcb8a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003991651s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-028074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-028074 "pgrep -a kubelet"
I1002 01:09:13.987296 1468453 config.go:182] Loaded profile config "kindnet-028074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-028074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sh8wn" [ba34a84f-438c-46fc-a5b4-d33194e0bae9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sh8wn" [ba34a84f-438c-46fc-a5b4-d33194e0bae9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004213488s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-028074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (77.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.520370958s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.52s)
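--enable-default-cni is the older spelling for minikube's built-in bridge CNI; recent minikube releases document it as deprecated in favor of --cni=bridge (exercised separately below), so the two profiles cover the same plugin through different flags. The assumed modern equivalent of the start line above:

  out/minikube-linux-arm64 start -p bridge-demo --driver=docker --container-runtime=crio \
    --memory=3072 --cni=bridge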

                                                
                                    
TestNetworkPlugins/group/bridge/Start (82.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1002 01:09:51.910250 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:19.613375 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/default-k8s-diff-port-657875/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:31.649790 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:31.656107 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:31.667469 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:31.689655 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:31.731029 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:31.812454 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:31.974225 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:32.296034 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:32.938200 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:34.219795 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:36.782013 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:41.903645 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
E1002 01:10:45.927433 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/functional-744852/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-028074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m22.297523725s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-028074 "pgrep -a kubelet"
I1002 01:10:51.884298 1468453 config.go:182] Loaded profile config "enable-default-cni-028074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-028074 replace --force -f testdata/netcat-deployment.yaml
E1002 01:10:52.146180 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/calico-028074/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tvqnp" [fc902f3e-4b63-4c88-8d2f-209c7e69e1c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tvqnp" [fc902f3e-4b63-4c88-8d2f-209c7e69e1c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003175193s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-028074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-028074 "pgrep -a kubelet"
I1002 01:11:13.633368 1468453 config.go:182] Loaded profile config "bridge-028074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-028074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zjmjh" [31636486-9cef-42b1-9f5c-7591d88d9724] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 01:11:16.406689 1468453 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/no-preload-425160/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-zjmjh" [31636486-9cef-42b1-9f5c-7591d88d9724] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004429s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-028074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-028074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (29/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.56s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-549806 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-549806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-549806
--- SKIP: TestDownloadOnlyKic (0.56s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:783: skipping: crio not supported
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-902832 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
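The three tunnel DNS subtests above share a single platform gate in functional_test_tunnel_test.go: DNS forwarding is only wired up for the Hyperkit driver on macOS. A hedged sketch of the check (the real code also inspects the driver, which is omitted here):

    package example

    import (
        "runtime"
        "testing"
    )

    // skipUnlessDarwin models the platform half of the gate above.
    func skipUnlessDarwin(t *testing.T) {
        t.Helper()
        if runtime.GOOS != "darwin" {
            t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
        }
    }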

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-824750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-824750
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.52s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-028074 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-028074

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-028074

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-028074

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-028074

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-028074

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-028074

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-028074

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-028074

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-028074

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-028074

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: /etc/hosts:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: /etc/resolv.conf:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-028074

>>> host: crictl pods:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: crictl containers:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> k8s: describe netcat deployment:
error: context "kubenet-028074" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-028074" does not exist

>>> k8s: netcat logs:
error: context "kubenet-028074" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-028074" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-028074" does not exist

>>> k8s: coredns logs:
error: context "kubenet-028074" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-028074" does not exist

>>> k8s: api server logs:
error: context "kubenet-028074" does not exist

>>> host: /etc/cni:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: ip a s:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: ip r s:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: iptables-save:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: iptables table nat:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-028074" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-028074" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-028074" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: kubelet daemon config:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> k8s: kubelet logs:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-1463060/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:49:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-341519
contexts:
- context:
    cluster: force-systemd-flag-341519
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:49:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-flag-341519
  name: force-systemd-flag-341519
current-context: force-systemd-flag-341519
kind: Config
preferences: {}
users:
- name: force-systemd-flag-341519
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/force-systemd-flag-341519/client.crt
    client-key: /home/jenkins/minikube-integration/19740-1463060/.minikube/profiles/force-systemd-flag-341519/client.key

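One detail worth noting in the kubectl config dump above: it describes a leftover force-systemd-flag-341519 profile, not kubenet-028074. That is consistent with every context lookup in this debug log failing: the kubenet-028074 profile was never started (the test skipped before creating it), so no context for it exists, and the collector simply printed whatever kubeconfig was current, presumably via something like kubectl config view (the exact command is not shown in the log).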
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-028074

>>> host: docker daemon status:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: docker daemon config:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: docker system info:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: cri-docker daemon status:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: cri-docker daemon config:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: cri-dockerd version:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: containerd daemon status:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: containerd daemon config:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: containerd config dump:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: crio daemon status:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: crio daemon config:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: /etc/crio:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

>>> host: crio config:
* Profile "kubenet-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028074"

----------------------- debugLogs end: kubenet-028074 [took: 4.272349445s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-028074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-028074
--- SKIP: TestNetworkPlugins/group/kubenet (4.52s)
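The kubenet skip itself comes from net_test.go:93: kubenet is not a CNI plugin, and the crio runtime requires one. A hedged sketch of that gate, with illustrative names only:

    package example

    import "testing"

    // skipKubenetOnCRIO models the gate above: kubenet provides no CNI,
    // and crio cannot run pods without one.
    func skipKubenetOnCRIO(t *testing.T, containerRuntime string) {
        t.Helper()
        if containerRuntime == "crio" {
            t.Skip("Skipping the test as crio container runtimes requires CNI")
        }
    }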

TestNetworkPlugins/group/cilium (5.5s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-028074 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-028074

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-028074

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-028074

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-028074

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-028074

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-028074

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-028074

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-028074

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-028074

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-028074

>>> host: /etc/nsswitch.conf:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: /etc/hosts:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: /etc/resolv.conf:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-028074

>>> host: crictl pods:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: crictl containers:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> k8s: describe netcat deployment:
error: context "cilium-028074" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-028074" does not exist

>>> k8s: netcat logs:
error: context "cilium-028074" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-028074" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-028074" does not exist

>>> k8s: coredns logs:
error: context "cilium-028074" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-028074" does not exist

>>> k8s: api server logs:
error: context "cilium-028074" does not exist

>>> host: /etc/cni:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: ip a s:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: ip r s:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: iptables-save:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: iptables table nat:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-028074

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-028074

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-028074" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-028074" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-028074

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-028074

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-028074" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-028074" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-028074" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-028074" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-028074" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: kubelet daemon config:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> k8s: kubelet logs:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-028074

>>> host: docker daemon status:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: docker daemon config:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: docker system info:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: cri-docker daemon status:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: cri-docker daemon config:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: cri-dockerd version:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: containerd daemon status:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: containerd daemon config:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: containerd config dump:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: crio daemon status:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: crio daemon config:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: /etc/crio:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

>>> host: crio config:
* Profile "cilium-028074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028074"

----------------------- debugLogs end: cilium-028074 [took: 5.305391276s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-028074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-028074
--- SKIP: TestNetworkPlugins/group/cilium (5.50s)
