Test Report: Docker_Linux_crio_arm64 19790

b9d2e2c9658f87d0032c63e9ff5f9056e8d14f14:2024-10-14:36644

Tests failed (2/329)

Order  Failed test                        Duration (s)
35     TestAddons/parallel/Ingress        152.54
37     TestAddons/parallel/MetricsServer  346.12
TestAddons/parallel/Ingress (152.54s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-002422 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-002422 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-002422 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c8e1b210-5413-4f21-96cb-5ccf9e2929b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c8e1b210-5413-4f21-96cb-5ccf9e2929b8] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003955006s
I1014 13:43:45.379905    7544 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-002422 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.314787333s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-002422 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
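Exit status 28 is curl's "operation timed out" code, so the probe above never received an HTTP response from the ingress controller before the test gave up. A minimal sketch for reproducing the check by hand, assuming the addons-002422 profile is still running (the --max-time flag and the cluster-wide ingress listing are added here for illustration):

    # Confirm the ingress-nginx controller pod is actually Ready
    kubectl --context addons-002422 get pods -n ingress-nginx \
      -l app.kubernetes.io/component=controller

    # Confirm the Ingress rule from testdata/nginx-ingress-v1.yaml exists
    kubectl --context addons-002422 get ingress -A

    # Re-run the probe the test performs, with an explicit timeout
    out/minikube-linux-arm64 -p addons-002422 ssh \
      "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"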
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-002422
helpers_test.go:235: (dbg) docker inspect addons-002422:

-- stdout --
	[
	    {
	        "Id": "05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c",
	        "Created": "2024-10-14T13:39:26.040660176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8793,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-14T13:39:26.200141481Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c/hosts",
	        "LogPath": "/var/lib/docker/containers/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c-json.log",
	        "Name": "/addons-002422",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-002422:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-002422",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4aa3658e12047d4aae80e56b1a737b93933e3445eef34b2f05f9ae1a1f27b38b-init/diff:/var/lib/docker/overlay2/0fbe7ab461eb9f9a72ecb1d2c088de9e51a70b12c6d6de37aeffa6e2c5634bdc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4aa3658e12047d4aae80e56b1a737b93933e3445eef34b2f05f9ae1a1f27b38b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4aa3658e12047d4aae80e56b1a737b93933e3445eef34b2f05f9ae1a1f27b38b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4aa3658e12047d4aae80e56b1a737b93933e3445eef34b2f05f9ae1a1f27b38b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-002422",
	                "Source": "/var/lib/docker/volumes/addons-002422/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-002422",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-002422",
	                "name.minikube.sigs.k8s.io": "addons-002422",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0aed6d17065638fabcf4af9629eb2706f94c1b790a82245b3b3aad651ea1da99",
	            "SandboxKey": "/var/run/docker/netns/0aed6d170656",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-002422": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0ff409cb6a6d634b31679069de159a6c4d604dc8e7199db02844607a2ed8ceed",
	                    "EndpointID": "6ecc69181af9927db04c9d672fff7ea2ed76c70627324bcf71e3d5589e3b0324",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-002422",
	                        "05e13f44fa23"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
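Each exposed container port in the inspect output above is bound to an ephemeral host port on 127.0.0.1 (22/tcp maps to 32768, the SSH endpoint the test harness dials). The same mapping can be read back directly with a Go template, the form the harness itself uses later in this log; a short sketch:

    # Print the host port forwarded to the container's sshd (32768 here)
    docker container inspect addons-002422 \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'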
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-002422 -n addons-002422
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-002422 logs -n 25: (1.563613616s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-457703              | download-only-457703   | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| start   | -o=json --download-only              | download-only-347934   | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | -p download-only-347934              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-347934              | download-only-347934   | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-457703              | download-only-457703   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| delete  | -p download-only-347934              | download-only-347934   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| start   | --download-only -p                   | download-docker-849591 | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | download-docker-849591               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-849591            | download-docker-849591 | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| start   | --download-only -p                   | binary-mirror-893512   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | binary-mirror-893512                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35277               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-893512              | binary-mirror-893512   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| addons  | disable dashboard -p                 | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | addons-002422                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | addons-002422                        |                        |         |         |                     |                     |
	| start   | -p addons-002422 --wait=true         | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-002422 addons disable         | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-002422 addons disable         | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | -p addons-002422                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-002422 addons disable         | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-002422 ip                     | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	| addons  | addons-002422 addons disable         | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-002422 addons                 | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:43 UTC | 14 Oct 24 13:43 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-002422 addons                 | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:43 UTC | 14 Oct 24 13:43 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-002422 addons                 | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:43 UTC | 14 Oct 24 13:43 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ssh     | addons-002422 ssh curl -s            | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:43 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-002422 ip                     | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:45 UTC | 14 Oct 24 13:45 UTC |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:39:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:39:01.519189    8300 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:39:01.519388    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:39:01.519399    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:39:01.519408    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:39:01.519689    8300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	I1014 13:39:01.520212    8300 out.go:352] Setting JSON to false
	I1014 13:39:01.521042    8300 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1292,"bootTime":1728911849,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1014 13:39:01.521114    8300 start.go:139] virtualization:  
	I1014 13:39:01.523571    8300 out.go:177] * [addons-002422] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 13:39:01.525747    8300 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:39:01.525781    8300 notify.go:220] Checking for updates...
	I1014 13:39:01.529853    8300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:39:01.531546    8300 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	I1014 13:39:01.532842    8300 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	I1014 13:39:01.534232    8300 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 13:39:01.535691    8300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:39:01.537232    8300 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:39:01.564142    8300 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:39:01.564253    8300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:39:01.620162    8300 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-14 13:39:01.611082175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:39:01.620265    8300 docker.go:318] overlay module found
	I1014 13:39:01.621991    8300 out.go:177] * Using the docker driver based on user configuration
	I1014 13:39:01.623225    8300 start.go:297] selected driver: docker
	I1014 13:39:01.623240    8300 start.go:901] validating driver "docker" against <nil>
	I1014 13:39:01.623253    8300 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:39:01.623855    8300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:39:01.686515    8300 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-14 13:39:01.677396922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:39:01.686715    8300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:39:01.686954    8300 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:39:01.688844    8300 out.go:177] * Using Docker driver with root privileges
	I1014 13:39:01.690119    8300 cni.go:84] Creating CNI manager for ""
	I1014 13:39:01.690189    8300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 13:39:01.690213    8300 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 13:39:01.690301    8300 start.go:340] cluster config:
	{Name:addons-002422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-002422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:39:01.691786    8300 out.go:177] * Starting "addons-002422" primary control-plane node in "addons-002422" cluster
	I1014 13:39:01.692864    8300 cache.go:121] Beginning downloading kic base image for docker with crio
	I1014 13:39:01.694108    8300 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1014 13:39:01.695781    8300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:39:01.695827    8300 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1014 13:39:01.695838    8300 cache.go:56] Caching tarball of preloaded images
	I1014 13:39:01.695840    8300 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1014 13:39:01.695914    8300 preload.go:172] Found /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 13:39:01.695924    8300 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:39:01.696276    8300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/config.json ...
	I1014 13:39:01.696300    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/config.json: {Name:mke32a7b3203164b7b45aacc3b9f08280e6d7f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:01.712115    8300 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1014 13:39:01.712224    8300 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1014 13:39:01.712242    8300 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1014 13:39:01.712246    8300 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1014 13:39:01.712253    8300 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1014 13:39:01.712258    8300 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1014 13:39:18.423690    8300 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1014 13:39:18.423728    8300 cache.go:194] Successfully downloaded all kic artifacts
	I1014 13:39:18.423768    8300 start.go:360] acquireMachinesLock for addons-002422: {Name:mkd84a4fa8b14773f3ba751e5d68c67ef06bd4f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:39:18.423889    8300 start.go:364] duration metric: took 99.971µs to acquireMachinesLock for "addons-002422"
	I1014 13:39:18.423920    8300 start.go:93] Provisioning new machine with config: &{Name:addons-002422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-002422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:39:18.424000    8300 start.go:125] createHost starting for "" (driver="docker")
	I1014 13:39:18.426424    8300 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1014 13:39:18.426686    8300 start.go:159] libmachine.API.Create for "addons-002422" (driver="docker")
	I1014 13:39:18.426720    8300 client.go:168] LocalClient.Create starting
	I1014 13:39:18.426812    8300 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem
	I1014 13:39:18.926000    8300 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/cert.pem
	I1014 13:39:19.558813    8300 cli_runner.go:164] Run: docker network inspect addons-002422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 13:39:19.574302    8300 cli_runner.go:211] docker network inspect addons-002422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 13:39:19.574389    8300 network_create.go:284] running [docker network inspect addons-002422] to gather additional debugging logs...
	I1014 13:39:19.574411    8300 cli_runner.go:164] Run: docker network inspect addons-002422
	W1014 13:39:19.589486    8300 cli_runner.go:211] docker network inspect addons-002422 returned with exit code 1
	I1014 13:39:19.589523    8300 network_create.go:287] error running [docker network inspect addons-002422]: docker network inspect addons-002422: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-002422 not found
	I1014 13:39:19.589536    8300 network_create.go:289] output of [docker network inspect addons-002422]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-002422 not found
	
	** /stderr **
	I1014 13:39:19.589632    8300 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 13:39:19.605243    8300 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400055c310}
	I1014 13:39:19.605285    8300 network_create.go:124] attempt to create docker network addons-002422 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 13:39:19.605337    8300 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-002422 addons-002422
	I1014 13:39:19.672554    8300 network_create.go:108] docker network addons-002422 192.168.49.0/24 created
	I1014 13:39:19.672581    8300 kic.go:121] calculated static IP "192.168.49.2" for the "addons-002422" container
	I1014 13:39:19.672660    8300 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 13:39:19.687481    8300 cli_runner.go:164] Run: docker volume create addons-002422 --label name.minikube.sigs.k8s.io=addons-002422 --label created_by.minikube.sigs.k8s.io=true
	I1014 13:39:19.709849    8300 oci.go:103] Successfully created a docker volume addons-002422
	I1014 13:39:19.709939    8300 cli_runner.go:164] Run: docker run --rm --name addons-002422-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-002422 --entrypoint /usr/bin/test -v addons-002422:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1014 13:39:21.906996    8300 cli_runner.go:217] Completed: docker run --rm --name addons-002422-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-002422 --entrypoint /usr/bin/test -v addons-002422:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (2.197000817s)
	I1014 13:39:21.907030    8300 oci.go:107] Successfully prepared a docker volume addons-002422
	I1014 13:39:21.907049    8300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:39:21.907067    8300 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 13:39:21.907137    8300 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-002422:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 13:39:25.970946    8300 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-002422:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.063768387s)
	I1014 13:39:25.970975    8300 kic.go:203] duration metric: took 4.063905487s to extract preloaded images to volume ...
	W1014 13:39:25.971118    8300 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 13:39:25.971246    8300 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 13:39:26.025710    8300 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-002422 --name addons-002422 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-002422 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-002422 --network addons-002422 --ip 192.168.49.2 --volume addons-002422:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1014 13:39:26.381604    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Running}}
	I1014 13:39:26.403583    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:26.426817    8300 cli_runner.go:164] Run: docker exec addons-002422 stat /var/lib/dpkg/alternatives/iptables
	I1014 13:39:26.492116    8300 oci.go:144] the created container "addons-002422" has a running status.
	I1014 13:39:26.492143    8300 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa...
	I1014 13:39:27.159451    8300 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 13:39:27.183362    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:27.205625    8300 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 13:39:27.205645    8300 kic_runner.go:114] Args: [docker exec --privileged addons-002422 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 13:39:27.286588    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:27.318504    8300 machine.go:93] provisionDockerMachine start ...
	I1014 13:39:27.318598    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:27.342014    8300 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:27.342285    8300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:27.342296    8300 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 13:39:27.476294    8300 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-002422
	
	I1014 13:39:27.476315    8300 ubuntu.go:169] provisioning hostname "addons-002422"
	I1014 13:39:27.476377    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:27.498515    8300 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:27.498751    8300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:27.498763    8300 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-002422 && echo "addons-002422" | sudo tee /etc/hostname
	I1014 13:39:27.649621    8300 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-002422
	
	I1014 13:39:27.649757    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:27.670449    8300 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:27.670685    8300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:27.670702    8300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-002422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-002422/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-002422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:39:27.796523    8300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:39:27.796547    8300 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19790-2228/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-2228/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-2228/.minikube}
	I1014 13:39:27.796597    8300 ubuntu.go:177] setting up certificates
	I1014 13:39:27.796609    8300 provision.go:84] configureAuth start
	I1014 13:39:27.796680    8300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-002422
	I1014 13:39:27.813604    8300 provision.go:143] copyHostCerts
	I1014 13:39:27.813686    8300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-2228/.minikube/key.pem (1675 bytes)
	I1014 13:39:27.813805    8300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-2228/.minikube/ca.pem (1082 bytes)
	I1014 13:39:27.813863    8300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-2228/.minikube/cert.pem (1123 bytes)
	I1014 13:39:27.813939    8300 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-2228/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca-key.pem org=jenkins.addons-002422 san=[127.0.0.1 192.168.49.2 addons-002422 localhost minikube]
	I1014 13:39:28.604899    8300 provision.go:177] copyRemoteCerts
	I1014 13:39:28.604976    8300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:39:28.605031    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:28.621097    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:28.713880    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 13:39:28.737206    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:39:28.760651    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:39:28.783892    8300 provision.go:87] duration metric: took 987.268952ms to configureAuth
	I1014 13:39:28.783928    8300 ubuntu.go:193] setting minikube options for container-runtime
	I1014 13:39:28.784128    8300 config.go:182] Loaded profile config "addons-002422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:28.784234    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:28.801092    8300 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:28.801333    8300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:28.801366    8300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:39:29.021776    8300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:39:29.021839    8300 machine.go:96] duration metric: took 1.703315975s to provisionDockerMachine
	I1014 13:39:29.021866    8300 client.go:171] duration metric: took 10.595136953s to LocalClient.Create
	I1014 13:39:29.021891    8300 start.go:167] duration metric: took 10.595203636s to libmachine.API.Create "addons-002422"
	I1014 13:39:29.021923    8300 start.go:293] postStartSetup for "addons-002422" (driver="docker")
	I1014 13:39:29.021950    8300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:39:29.022059    8300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:39:29.022138    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:29.039955    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:29.138030    8300 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:39:29.141073    8300 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 13:39:29.141105    8300 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1014 13:39:29.141118    8300 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1014 13:39:29.141125    8300 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1014 13:39:29.141135    8300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-2228/.minikube/addons for local assets ...
	I1014 13:39:29.141205    8300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-2228/.minikube/files for local assets ...
	I1014 13:39:29.141241    8300 start.go:296] duration metric: took 119.286948ms for postStartSetup
	I1014 13:39:29.141939    8300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-002422
	I1014 13:39:29.161897    8300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/config.json ...
	I1014 13:39:29.162248    8300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 13:39:29.162301    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:29.179425    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:29.273204    8300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 13:39:29.277388    8300 start.go:128] duration metric: took 10.853372344s to createHost
	I1014 13:39:29.277420    8300 start.go:83] releasing machines lock for "addons-002422", held for 10.853516426s
	I1014 13:39:29.277488    8300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-002422
	I1014 13:39:29.292658    8300 ssh_runner.go:195] Run: cat /version.json
	I1014 13:39:29.292711    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:29.293039    8300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:39:29.293123    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:29.309343    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:29.318878    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:29.400173    8300 ssh_runner.go:195] Run: systemctl --version
	I1014 13:39:29.535275    8300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:39:29.682522    8300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 13:39:29.686453    8300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:39:29.706862    8300 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1014 13:39:29.706974    8300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:39:29.734354    8300 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
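
The two find ... -exec mv passes above sideline any preexisting loopback, bridge, and podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube installs later is active. A rough Go equivalent of that rename pass, assuming the same /etc/cni/net.d directory and glob patterns (root privileges are needed to rename files there):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		for _, pattern := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
			if err != nil {
				panic(err)
			}
			for _, p := range matches {
				if filepath.Ext(p) == ".mk_disabled" {
					continue // already disabled; keeps re-runs idempotent
				}
				if err := os.Rename(p, p+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, "disable:", err)
					continue
				}
				fmt.Println("disabled", p)
			}
		}
	}

The suffix check plays the same role as the -not -name *.mk_disabled filter in the log, so running the pass twice leaves files named *.mk_disabled alone.
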
	I1014 13:39:29.734375    8300 start.go:495] detecting cgroup driver to use...
	I1014 13:39:29.734406    8300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 13:39:29.734454    8300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:39:29.749192    8300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:39:29.760184    8300 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:39:29.760246    8300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:39:29.774112    8300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:39:29.788395    8300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:39:29.880801    8300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:39:29.972415    8300 docker.go:233] disabling docker service ...
	I1014 13:39:29.972481    8300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:39:29.992061    8300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:39:30.011825    8300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:39:30.109186    8300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:39:30.209178    8300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:39:30.221080    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:39:30.237424    8300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:39:30.237513    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.247070    8300 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:39:30.247171    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.256865    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.266642    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.277380    8300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:39:30.286952    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.297328    8300 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.313264    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
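
Taken together, the sed edits in the last few steps rewrite CRI-O's drop-in config in place: pin the pause image, set cgroupfs as the cgroup manager, force conmon into the pod cgroup, and open unprivileged port 0 via default_sysctls. Reconstructed from those commands (the section headers follow CRI-O's documented TOML schema and are an assumption; only the keys and values are taken from the log), /etc/crio/crio.conf.d/02-crio.conf ends up with roughly:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
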
	I1014 13:39:30.323340    8300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:39:30.331728    8300 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:39:30.331833    8300 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:39:30.345487    8300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
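
The status-255 sysctl above is an expected probe-and-recover path: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so minikube probes, loads the module, and then enables IPv4 forwarding. A small Go sketch of the same check-then-enable sequence; the proc paths and the modprobe invocation come from the log, the rest is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); err != nil {
			// The bridge netfilter knob is missing; load the module that provides it.
			if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward (needs root).
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
		}
	}
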
	I1014 13:39:30.354185    8300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:30.443199    8300 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:39:30.559773    8300 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:39:30.559901    8300 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:39:30.563352    8300 start.go:563] Will wait 60s for crictl version
	I1014 13:39:30.563470    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:39:30.567069    8300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:39:30.608005    8300 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1014 13:39:30.608183    8300 ssh_runner.go:195] Run: crio --version
	I1014 13:39:30.644625    8300 ssh_runner.go:195] Run: crio --version
	I1014 13:39:30.685006    8300 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1014 13:39:30.686361    8300 cli_runner.go:164] Run: docker network inspect addons-002422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 13:39:30.703035    8300 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 13:39:30.706678    8300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
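
The grep -v / echo pipeline above is an idempotent hosts-file upsert: filter out any stale host.minikube.internal line, append the current mapping, then copy the result back over /etc/hosts (the same pattern recurs further down for control-plane.minikube.internal). A hedged Go equivalent, with upsertHost as a hypothetical helper name:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost removes any existing line for name from the hosts file and
	// appends "ip\tname", mirroring the grep -v / echo pipeline in the log.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var keep []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry, as grep -v does
			}
			keep = append(keep, line)
		}
		keep = append(keep, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}

The single WriteFile call stands in for the shell version's write-to-/tmp/h.$$-then-sudo-cp dance, which exists there only to cross the privilege boundary.
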
	I1014 13:39:30.717505    8300 kubeadm.go:883] updating cluster {Name:addons-002422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-002422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 13:39:30.717623    8300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:39:30.717682    8300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:39:30.792087    8300 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 13:39:30.792116    8300 crio.go:433] Images already preloaded, skipping extraction
	I1014 13:39:30.792174    8300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:39:30.827792    8300 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 13:39:30.827816    8300 cache_images.go:84] Images are preloaded, skipping loading
	I1014 13:39:30.827824    8300 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1014 13:39:30.827955    8300 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-002422 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-002422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:39:30.828039    8300 ssh_runner.go:195] Run: crio config
	I1014 13:39:30.874149    8300 cni.go:84] Creating CNI manager for ""
	I1014 13:39:30.874171    8300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 13:39:30.874181    8300 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 13:39:30.874224    8300 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-002422 NodeName:addons-002422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 13:39:30.874361    8300 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-002422"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
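The generated kubeadm.yaml above packs four documents into one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick way to sanity-check a single document is to split on the --- separators and unmarshal only the one you care about; below is a sketch using gopkg.in/yaml.v3, where the library choice and the handful of struct fields are assumptions for illustration, matching the KubeletConfiguration keys visible in the log.

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// kubeletConfig holds only the KubeletConfiguration fields shown in the log.
	type kubeletConfig struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
		FailSwapOn               bool   `yaml:"failSwapOn"`
		StaticPodPath            string `yaml:"staticPodPath"`
	}

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		// kubeadm.yaml is a multi-document file; split on the --- separators.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var kc kubeletConfig
			if err := yaml.Unmarshal([]byte(doc), &kc); err != nil || kc.Kind != "KubeletConfiguration" {
				continue
			}
			fmt.Printf("cgroupDriver=%s endpoint=%s failSwapOn=%v staticPodPath=%s\n",
				kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.FailSwapOn, kc.StaticPodPath)
		}
	}
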
	I1014 13:39:30.874429    8300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:39:30.882973    8300 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 13:39:30.883071    8300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 13:39:30.892223    8300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 13:39:30.909506    8300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:39:30.926769    8300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1014 13:39:30.944321    8300 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 13:39:30.947745    8300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:39:30.958400    8300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:31.045686    8300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:39:31.059522    8300 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422 for IP: 192.168.49.2
	I1014 13:39:31.059593    8300 certs.go:194] generating shared ca certs ...
	I1014 13:39:31.059622    8300 certs.go:226] acquiring lock for ca certs: {Name:mk06df15dc793252bd5ffa6daa3e5f2510797850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:31.059783    8300 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-2228/.minikube/ca.key
	I1014 13:39:31.279549    8300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2228/.minikube/ca.crt ...
	I1014 13:39:31.279582    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/ca.crt: {Name:mkf2e09cdeaf406bd5dbfb6df51fda19d11b3a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:31.279812    8300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2228/.minikube/ca.key ...
	I1014 13:39:31.279826    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/ca.key: {Name:mkbb0140f8b18956b3e337fe5d9dac3444c3cff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:31.279917    8300 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.key
	I1014 13:39:32.102633    8300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.crt ...
	I1014 13:39:32.102667    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.crt: {Name:mk87e80ab56810a443caa4380c01f4fa59f6347a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.102908    8300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.key ...
	I1014 13:39:32.102928    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.key: {Name:mk3f83de2f8ad31643196f738fbd59675505d818 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.103014    8300 certs.go:256] generating profile certs ...
	I1014 13:39:32.103079    8300 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.key
	I1014 13:39:32.103097    8300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt with IP's: []
	I1014 13:39:32.527349    8300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt ...
	I1014 13:39:32.527383    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: {Name:mk7e896bcb1761dc92896d4828a4f921b266d096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.527596    8300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.key ...
	I1014 13:39:32.527612    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.key: {Name:mk2471b3e7dfa66ccab07ee70fc530ef48ac5f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.527706    8300 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key.17286ce0
	I1014 13:39:32.527726    8300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt.17286ce0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 13:39:33.097055    8300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt.17286ce0 ...
	I1014 13:39:33.097092    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt.17286ce0: {Name:mk0b396ed04de990231c7535e37286cbdddbeccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:33.097278    8300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key.17286ce0 ...
	I1014 13:39:33.097292    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key.17286ce0: {Name:mkeaac9f624665f13ab091190d99656a19ad24ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:33.097375    8300 certs.go:381] copying /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt.17286ce0 -> /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt
	I1014 13:39:33.097463    8300 certs.go:385] copying /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key.17286ce0 -> /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key
	I1014 13:39:33.097517    8300 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.key
	I1014 13:39:33.097536    8300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.crt with IP's: []
	I1014 13:39:33.368114    8300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.crt ...
	I1014 13:39:33.368146    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.crt: {Name:mk617006c2b50b41e3bf3976f48c6e2173294ddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:33.368332    8300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.key ...
	I1014 13:39:33.368345    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.key: {Name:mk2d2071f6a997e883c7ef5cbfc1c62f114134be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:33.368542    8300 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:39:33.368583    8300 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem (1082 bytes)
	I1014 13:39:33.368611    8300 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:39:33.368640    8300 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/key.pem (1675 bytes)
	I1014 13:39:33.369276    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:39:33.396664    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 13:39:33.421787    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:39:33.446268    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 13:39:33.470393    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 13:39:33.498381    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:39:33.522109    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:39:33.545983    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 13:39:33.569791    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:39:33.594911    8300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 13:39:33.613470    8300 ssh_runner.go:195] Run: openssl version
	I1014 13:39:33.618883    8300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:39:33.628463    8300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:33.631697    8300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:33.631783    8300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:33.638573    8300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
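
The b5213941.0 symlink above follows OpenSSL's lookup convention: tools resolve CAs in /etc/ssl/certs by the certificate's subject-name hash, which is exactly what openssl x509 -hash -noout prints. A Go sketch of the same two steps, shelling out to openssl for the hash; the paths match the log, and the remove-then-symlink pair stands in for ln -fs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl x509 -hash -noout prints the subject-name hash used for lookup.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// ln -fs equivalent: replace any existing link (needs root).
		_ = os.Remove(link)
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link)
	}
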
	I1014 13:39:33.647929    8300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:39:33.651145    8300 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:39:33.651190    8300 kubeadm.go:392] StartCluster: {Name:addons-002422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-002422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:39:33.651267    8300 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 13:39:33.651321    8300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 13:39:33.690330    8300 cri.go:89] found id: ""
	I1014 13:39:33.690396    8300 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 13:39:33.699200    8300 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 13:39:33.707904    8300 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 13:39:33.707968    8300 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 13:39:33.716528    8300 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 13:39:33.716548    8300 kubeadm.go:157] found existing configuration files:
	
	I1014 13:39:33.716598    8300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 13:39:33.725169    8300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 13:39:33.725233    8300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 13:39:33.733899    8300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 13:39:33.742056    8300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 13:39:33.742158    8300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 13:39:33.750333    8300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 13:39:33.758992    8300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 13:39:33.759078    8300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 13:39:33.767897    8300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 13:39:33.776528    8300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 13:39:33.776599    8300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 13:39:33.785190    8300 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 13:39:33.823923    8300 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 13:39:33.824199    8300 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 13:39:33.844918    8300 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1014 13:39:33.845063    8300 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1014 13:39:33.845122    8300 kubeadm.go:310] OS: Linux
	I1014 13:39:33.845223    8300 kubeadm.go:310] CGROUPS_CPU: enabled
	I1014 13:39:33.845287    8300 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1014 13:39:33.845338    8300 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1014 13:39:33.845390    8300 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1014 13:39:33.845445    8300 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1014 13:39:33.845501    8300 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1014 13:39:33.845550    8300 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1014 13:39:33.845602    8300 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1014 13:39:33.845653    8300 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1014 13:39:33.916605    8300 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 13:39:33.916807    8300 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 13:39:33.916916    8300 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 13:39:33.925110    8300 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 13:39:33.930119    8300 out.go:235]   - Generating certificates and keys ...
	I1014 13:39:33.930214    8300 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 13:39:33.930332    8300 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 13:39:34.198508    8300 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 13:39:34.622750    8300 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 13:39:34.805332    8300 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 13:39:35.248566    8300 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 13:39:35.947522    8300 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 13:39:35.947821    8300 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-002422 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 13:39:36.290501    8300 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 13:39:36.295957    8300 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-002422 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 13:39:36.540393    8300 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 13:39:36.985910    8300 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 13:39:37.131122    8300 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 13:39:37.131491    8300 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 13:39:37.561848    8300 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 13:39:38.018910    8300 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 13:39:38.921446    8300 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 13:39:39.097017    8300 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 13:39:39.398377    8300 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 13:39:39.399030    8300 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 13:39:39.401991    8300 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 13:39:39.403670    8300 out.go:235]   - Booting up control plane ...
	I1014 13:39:39.403765    8300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 13:39:39.403841    8300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 13:39:39.404568    8300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 13:39:39.414705    8300 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 13:39:39.420512    8300 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 13:39:39.420912    8300 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 13:39:39.515212    8300 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 13:39:39.515331    8300 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 13:39:40.517315    8300 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001877986s
	I1014 13:39:40.517407    8300 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 13:39:46.520416    8300 kubeadm.go:310] [api-check] The API server is healthy after 6.001293176s
	I1014 13:39:46.537487    8300 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 13:39:46.551261    8300 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 13:39:46.577727    8300 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 13:39:46.577920    8300 kubeadm.go:310] [mark-control-plane] Marking the node addons-002422 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 13:39:46.587508    8300 kubeadm.go:310] [bootstrap-token] Using token: p0ldfg.l4f8resh3yr04gj6
	I1014 13:39:46.588848    8300 out.go:235]   - Configuring RBAC rules ...
	I1014 13:39:46.588969    8300 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 13:39:46.594842    8300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 13:39:46.605038    8300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 13:39:46.610706    8300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 13:39:46.615031    8300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 13:39:46.619515    8300 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 13:39:46.925078    8300 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 13:39:47.354345    8300 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 13:39:47.924387    8300 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 13:39:47.925552    8300 kubeadm.go:310] 
	I1014 13:39:47.925643    8300 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 13:39:47.925659    8300 kubeadm.go:310] 
	I1014 13:39:47.925756    8300 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 13:39:47.925765    8300 kubeadm.go:310] 
	I1014 13:39:47.925801    8300 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 13:39:47.925880    8300 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 13:39:47.925939    8300 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 13:39:47.925943    8300 kubeadm.go:310] 
	I1014 13:39:47.926001    8300 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 13:39:47.926005    8300 kubeadm.go:310] 
	I1014 13:39:47.926057    8300 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 13:39:47.926061    8300 kubeadm.go:310] 
	I1014 13:39:47.926116    8300 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 13:39:47.926206    8300 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 13:39:47.926279    8300 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 13:39:47.926283    8300 kubeadm.go:310] 
	I1014 13:39:47.926378    8300 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 13:39:47.926460    8300 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 13:39:47.926464    8300 kubeadm.go:310] 
	I1014 13:39:47.926553    8300 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p0ldfg.l4f8resh3yr04gj6 \
	I1014 13:39:47.926662    8300 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7f4316051a451070b62e5ea00267a1d9ae2a3434782771c12eaedf3124887c0a \
	I1014 13:39:47.926684    8300 kubeadm.go:310] 	--control-plane 
	I1014 13:39:47.926688    8300 kubeadm.go:310] 
	I1014 13:39:47.926779    8300 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 13:39:47.926783    8300 kubeadm.go:310] 
	I1014 13:39:47.926870    8300 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p0ldfg.l4f8resh3yr04gj6 \
	I1014 13:39:47.926979    8300 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7f4316051a451070b62e5ea00267a1d9ae2a3434782771c12eaedf3124887c0a 
	I1014 13:39:47.929366    8300 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1014 13:39:47.929552    8300 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 13:39:47.929594    8300 cni.go:84] Creating CNI manager for ""
	I1014 13:39:47.929630    8300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 13:39:47.931645    8300 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 13:39:47.932940    8300 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 13:39:47.936529    8300 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 13:39:47.936549    8300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 13:39:47.953687    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 13:39:48.223733    8300 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 13:39:48.223882    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:48.223931    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-002422 minikube.k8s.io/updated_at=2024_10_14T13_39_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=addons-002422 minikube.k8s.io/primary=true
	I1014 13:39:48.381596    8300 ops.go:34] apiserver oom_adj: -16
	I1014 13:39:48.381696    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:48.882570    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:49.381802    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:49.882430    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:50.381796    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:50.882381    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:51.382632    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:51.882261    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:52.381834    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:52.506604    8300 kubeadm.go:1113] duration metric: took 4.282782773s to wait for elevateKubeSystemPrivileges
	I1014 13:39:52.506630    8300 kubeadm.go:394] duration metric: took 18.855443881s to StartCluster
	I1014 13:39:52.506645    8300 settings.go:142] acquiring lock: {Name:mk543bfe3e4ad3a74f943b74c0d30c5d6649b3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:52.506755    8300 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-2228/kubeconfig
	I1014 13:39:52.507116    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/kubeconfig: {Name:mkdfcbe4a3a3bd606687ca36b460845a3c3f03d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:52.507287    8300 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:39:52.507446    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 13:39:52.507675    8300 config.go:182] Loaded profile config "addons-002422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:52.507742    8300 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1014 13:39:52.507819    8300 addons.go:69] Setting yakd=true in profile "addons-002422"
	I1014 13:39:52.507833    8300 addons.go:234] Setting addon yakd=true in "addons-002422"
	I1014 13:39:52.507856    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.508310    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.509142    8300 addons.go:69] Setting metrics-server=true in profile "addons-002422"
	I1014 13:39:52.509163    8300 addons.go:234] Setting addon metrics-server=true in "addons-002422"
	I1014 13:39:52.509188    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.509478    8300 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-002422"
	I1014 13:39:52.509491    8300 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-002422"
	I1014 13:39:52.509510    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.510208    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510573    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.511231    8300 out.go:177] * Verifying Kubernetes components...
	I1014 13:39:52.510580    8300 addons.go:69] Setting registry=true in profile "addons-002422"
	I1014 13:39:52.511505    8300 addons.go:234] Setting addon registry=true in "addons-002422"
	I1014 13:39:52.511538    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.511949    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510587    8300 addons.go:69] Setting storage-provisioner=true in profile "addons-002422"
	I1014 13:39:52.520596    8300 addons.go:234] Setting addon storage-provisioner=true in "addons-002422"
	I1014 13:39:52.520640    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.521115    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510591    8300 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-002422"
	I1014 13:39:52.532981    8300 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-002422"
	I1014 13:39:52.533369    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510595    8300 addons.go:69] Setting volcano=true in profile "addons-002422"
	I1014 13:39:52.550220    8300 addons.go:234] Setting addon volcano=true in "addons-002422"
	I1014 13:39:52.550275    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.510598    8300 addons.go:69] Setting volumesnapshots=true in profile "addons-002422"
	I1014 13:39:52.552126    8300 addons.go:234] Setting addon volumesnapshots=true in "addons-002422"
	I1014 13:39:52.552174    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.552826    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.553954    8300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:52.565059    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510632    8300 addons.go:69] Setting default-storageclass=true in profile "addons-002422"
	I1014 13:39:52.572346    8300 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-002422"
	I1014 13:39:52.572712    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510636    8300 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-002422"
	I1014 13:39:52.590505    8300 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-002422"
	I1014 13:39:52.510639    8300 addons.go:69] Setting cloud-spanner=true in profile "addons-002422"
	I1014 13:39:52.590563    8300 addons.go:234] Setting addon cloud-spanner=true in "addons-002422"
	I1014 13:39:52.590589    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.510643    8300 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-002422"
	I1014 13:39:52.590669    8300 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-002422"
	I1014 13:39:52.590688    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.591142    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.593729    8300 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1014 13:39:52.596547    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.597176    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510647    8300 addons.go:69] Setting ingress=true in profile "addons-002422"
	I1014 13:39:52.606101    8300 addons.go:234] Setting addon ingress=true in "addons-002422"
	I1014 13:39:52.606149    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.510650    8300 addons.go:69] Setting gcp-auth=true in profile "addons-002422"
	I1014 13:39:52.606423    8300 mustload.go:65] Loading cluster: addons-002422
	I1014 13:39:52.606578    8300 config.go:182] Loaded profile config "addons-002422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:52.606878    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.612375    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510654    8300 addons.go:69] Setting ingress-dns=true in profile "addons-002422"
	I1014 13:39:52.616829    8300 addons.go:234] Setting addon ingress-dns=true in "addons-002422"
	I1014 13:39:52.618350    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.618913    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510659    8300 addons.go:69] Setting inspektor-gadget=true in profile "addons-002422"
	I1014 13:39:52.672064    8300 addons.go:234] Setting addon inspektor-gadget=true in "addons-002422"
	I1014 13:39:52.672106    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.672586    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.674339    8300 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1014 13:39:52.674367    8300 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1014 13:39:52.674431    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.685876    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.726733    8300 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1014 13:39:52.729479    8300 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 13:39:52.729504    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1014 13:39:52.729571    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
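	Note: entries of the form "scp memory --> <path>" mean the addon manifest is rendered in memory and streamed to the node over the SSH session rather than copied from a local file. A rough shell equivalent, assuming the rendered manifest on stdin (the pipe is an illustration, not minikube's actual transport):
	
	  cat nvidia-device-plugin.yaml | ssh -p 32768 -i id_rsa docker@127.0.0.1 \
	    "sudo tee /etc/kubernetes/addons/nvidia-device-plugin.yaml >/dev/null"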
	I1014 13:39:52.738262    8300 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 13:39:52.740620    8300 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:39:52.740706    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 13:39:52.740816    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.759125    8300 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1014 13:39:52.761903    8300 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 13:39:52.761978    8300 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 13:39:52.762079    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.775806    8300 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1014 13:39:52.777050    8300 out.go:177]   - Using image docker.io/registry:2.8.3
	I1014 13:39:52.778795    8300 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1014 13:39:52.778815    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1014 13:39:52.778876    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.810423    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W1014 13:39:52.810650    8300 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1014 13:39:52.843322    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1014 13:39:52.843343    8300 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1014 13:39:52.843404    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.875932    8300 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-002422"
	I1014 13:39:52.875978    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.876396    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.900015    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1014 13:39:52.903825    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1014 13:39:52.905216    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1014 13:39:52.905782    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 13:39:52.905912    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:52.907430    8300 addons.go:234] Setting addon default-storageclass=true in "addons-002422"
	I1014 13:39:52.907470    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.907882    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.909830    8300 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1014 13:39:52.925147    8300 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 13:39:52.925170    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1014 13:39:52.925233    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.960835    8300 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1014 13:39:52.962599    8300 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:52.962627    8300 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1014 13:39:52.962620    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1014 13:39:52.963349    8300 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1014 13:39:52.963366    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1014 13:39:52.963429    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.963206    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.970289    8300 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 13:39:52.970310    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1014 13:39:52.970378    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.984524    8300 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1014 13:39:52.986563    8300 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1014 13:39:52.986585    8300 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1014 13:39:52.986665    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:53.000851    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.002898    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.004669    8300 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:53.005171    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.007617    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.008851    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1014 13:39:53.011846    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1014 13:39:53.012163    8300 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1014 13:39:53.014531    8300 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 13:39:53.014551    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1014 13:39:53.014610    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:53.017400    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1014 13:39:53.021178    8300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:39:53.032819    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1014 13:39:53.048821    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1014 13:39:53.050784    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1014 13:39:53.051154    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:53.075156    8300 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 13:39:53.075179    8300 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 13:39:53.075238    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:53.118592    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.125109    8300 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1014 13:39:53.127021    8300 out.go:177]   - Using image docker.io/busybox:stable
	I1014 13:39:53.128267    8300 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 13:39:53.128285    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1014 13:39:53.128347    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:53.149764    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.157878    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.173153    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.173565    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.210375    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.227983    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.228760    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	W1014 13:39:53.232143    8300 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1014 13:39:53.232171    8300 retry.go:31] will retry after 201.075001ms: ssh: handshake failed: EOF
	I1014 13:39:53.251959    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	W1014 13:39:53.252826    8300 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1014 13:39:53.252854    8300 retry.go:31] will retry after 290.693438ms: ssh: handshake failed: EOF
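	Note: sshutil treats these handshake EOFs as transient (sshd in the node container is still coming up) and retries after a short randomized delay via retry.go. A minimal sketch of the same retry shape in shell, with illustrative delays:
	
	  for delay in 0.2 0.3 0.5; do
	    ssh -p 32768 -i id_rsa docker@127.0.0.1 true && break
	    sleep "$delay"  # retry transient "handshake failed: EOF"
	  done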
	I1014 13:39:53.321325    8300 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1014 13:39:53.321400    8300 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1014 13:39:53.465529    8300 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1014 13:39:53.465600    8300 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1014 13:39:53.521197    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:39:53.559324    8300 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1014 13:39:53.559396    8300 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1014 13:39:53.588000    8300 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 13:39:53.588071    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1014 13:39:53.596777    8300 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1014 13:39:53.596867    8300 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1014 13:39:53.611889    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 13:39:53.691506    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 13:39:53.701027    8300 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1014 13:39:53.701101    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1014 13:39:53.706980    8300 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1014 13:39:53.707058    8300 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1014 13:39:53.721608    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1014 13:39:53.726010    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 13:39:53.727214    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 13:39:53.748244    8300 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 13:39:53.748324    8300 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 13:39:53.766870    8300 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1014 13:39:53.766935    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1014 13:39:53.798159    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 13:39:53.807972    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 13:39:53.829894    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1014 13:39:53.829968    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1014 13:39:53.831870    8300 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1014 13:39:53.831936    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1014 13:39:53.904458    8300 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1014 13:39:53.904529    8300 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1014 13:39:53.915013    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 13:39:53.918332    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1014 13:39:53.918352    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1014 13:39:53.967109    8300 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 13:39:53.967183    8300 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 13:39:53.995912    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1014 13:39:53.999172    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1014 13:39:54.008313    8300 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1014 13:39:54.008386    8300 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1014 13:39:54.050016    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1014 13:39:54.050117    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1014 13:39:54.150952    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 13:39:54.158895    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1014 13:39:54.158973    8300 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1014 13:39:54.216839    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1014 13:39:54.216910    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1014 13:39:54.298038    8300 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:54.298106    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1014 13:39:54.425246    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1014 13:39:54.425318    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1014 13:39:54.518844    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:54.627793    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1014 13:39:54.627858    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1014 13:39:54.710668    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1014 13:39:54.710741    8300 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1014 13:39:54.878400    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1014 13:39:54.878468    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1014 13:39:55.067607    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1014 13:39:55.067709    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1014 13:39:55.102129    8300 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.196317747s)
	I1014 13:39:55.102343    8300 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
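	Note: the pipeline that just completed splices a hosts block (and a log directive) into the CoreDNS Corefile so cluster workloads can resolve host.minikube.internal. Reconstructed from the sed expression above, the injected stanza is:
	
	  hosts {
	     192.168.49.1 host.minikube.internal
	     fallthrough
	  }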
	I1014 13:39:55.102246    8300 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.081032362s)
	I1014 13:39:55.103403    8300 node_ready.go:35] waiting up to 6m0s for node "addons-002422" to be "Ready" ...
	I1014 13:39:55.260571    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 13:39:55.260641    8300 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1014 13:39:55.430413    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 13:39:56.191149    8300 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-002422" context rescaled to 1 replicas
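	Note: the rescale at kapi.go:214 trims CoreDNS to one replica, which is enough for a single-node cluster; the kubectl equivalent would be:
	
	  kubectl -n kube-system scale deployment coredns --replicas=1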
	I1014 13:39:57.531843    8300 node_ready.go:53] node "addons-002422" has status "Ready":"False"
	I1014 13:39:58.190385    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.578417915s)
	I1014 13:39:58.190449    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.498880501s)
	I1014 13:39:58.190488    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.468813955s)
	I1014 13:39:58.190506    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.46442961s)
	I1014 13:39:58.190521    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.463252488s)
	I1014 13:39:58.190651    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.669384715s)
	I1014 13:39:58.489703    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.691446827s)
	I1014 13:39:59.453746    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.645701166s)
	I1014 13:39:59.453825    8300 addons.go:475] Verifying addon ingress=true in "addons-002422"
	I1014 13:39:59.454100    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.538956517s)
	I1014 13:39:59.454193    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.458209619s)
	I1014 13:39:59.454257    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.455024986s)
	I1014 13:39:59.454450    8300 addons.go:475] Verifying addon registry=true in "addons-002422"
	I1014 13:39:59.454313    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.303303753s)
	I1014 13:39:59.454779    8300 addons.go:475] Verifying addon metrics-server=true in "addons-002422"
	I1014 13:39:59.456620    8300 out.go:177] * Verifying registry addon...
	I1014 13:39:59.456787    8300 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-002422 service yakd-dashboard -n yakd-dashboard
	
	I1014 13:39:59.456791    8300 out.go:177] * Verifying ingress addon...
	I1014 13:39:59.460826    8300 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1014 13:39:59.461823    8300 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
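	Note: each kapi.go:75 poller blocks until every pod matching its label selector reports Ready, re-checking on an interval (hence the repeated "waiting for pod ..." lines below). Outside the harness, kubectl wait expresses the same condition:
	
	  kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx \
	    --for=condition=Ready --timeout=6m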
	I1014 13:39:59.470853    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.95192719s)
	W1014 13:39:59.470897    8300 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1014 13:39:59.470916    8300 retry.go:31] will retry after 363.594794ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
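	Note: this failure is an ordering race rather than a broken manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are sent in the same kubectl apply, and the API server has not yet established the new CRDs when the object is validated (hence "ensure CRDs are installed first"). minikube handles it by retrying with --force (see 13:39:59.834651 below); an alternative is to gate the object apply on CRD establishment, e.g.:
	
	  kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io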
	I1014 13:39:59.475044    8300 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 13:39:59.475137    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:59.494186    8300 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1014 13:39:59.494260    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:59.618776    8300 node_ready.go:53] node "addons-002422" has status "Ready":"False"
	I1014 13:39:59.674045    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.243504428s)
	I1014 13:39:59.674122    8300 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-002422"
	I1014 13:39:59.677091    8300 out.go:177] * Verifying csi-hostpath-driver addon...
	I1014 13:39:59.680601    8300 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1014 13:39:59.688079    8300 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 13:39:59.688106    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:59.834651    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:59.966110    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:59.967159    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:00.182897    8300 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1014 13:40:00.183061    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:40:00.208777    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:40:00.215133    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:00.327148    8300 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1014 13:40:00.348098    8300 addons.go:234] Setting addon gcp-auth=true in "addons-002422"
	I1014 13:40:00.348157    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:40:00.348646    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:40:00.371447    8300 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1014 13:40:00.371503    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:40:00.390256    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:40:00.465620    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:00.466509    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:00.685248    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:00.963894    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:00.965769    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:01.184031    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:01.465535    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.465945    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:01.685003    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:01.964337    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.965880    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:02.107412    8300 node_ready.go:53] node "addons-002422" has status "Ready":"False"
	I1014 13:40:02.184392    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:02.467313    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:02.468545    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:02.630593    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.795900604s)
	I1014 13:40:02.630657    8300 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.259182574s)
	I1014 13:40:02.633830    8300 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:40:02.636777    8300 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1014 13:40:02.639988    8300 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1014 13:40:02.640015    8300 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1014 13:40:02.658660    8300 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1014 13:40:02.658723    8300 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1014 13:40:02.677247    8300 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 13:40:02.677270    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1014 13:40:02.686328    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:02.697294    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 13:40:02.965552    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:02.966886    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:03.206469    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:03.211991    8300 addons.go:475] Verifying addon gcp-auth=true in "addons-002422"
	I1014 13:40:03.215047    8300 out.go:177] * Verifying gcp-auth addon...
	I1014 13:40:03.218536    8300 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1014 13:40:03.230067    8300 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1014 13:40:03.230140    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:03.464794    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:03.465865    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:03.684409    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:03.722413    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:03.964910    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:03.965754    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:04.184001    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:04.222224    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:04.464132    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.465835    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:04.607095    8300 node_ready.go:53] node "addons-002422" has status "Ready":"False"
	I1014 13:40:04.684540    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:04.722062    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:04.965026    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.966122    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:05.184931    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:05.222359    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:05.464871    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:05.465614    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:05.685221    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:05.722755    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:05.965721    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:05.966755    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:06.184612    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:06.226391    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:06.466295    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:06.466835    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:06.684279    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:06.722387    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:06.964517    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:06.966650    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:07.107318    8300 node_ready.go:53] node "addons-002422" has status "Ready":"False"
	I1014 13:40:07.184262    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.221898    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:07.464023    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:07.465715    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:07.684126    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.722266    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:07.964971    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:07.966311    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:08.185249    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:08.221712    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:08.465735    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:08.466519    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:08.643449    8300 node_ready.go:49] node "addons-002422" has status "Ready":"True"
	I1014 13:40:08.643475    8300 node_ready.go:38] duration metric: took 13.540005457s for node "addons-002422" to be "Ready" ...
	I1014 13:40:08.643486    8300 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
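	Note: node_ready polls the node object until its Ready condition turns True (13.54s here), after which pod_ready applies the same check to each system-critical pod. The kubectl equivalent of the node check:
	
	  kubectl wait --for=condition=Ready node/addons-002422 --timeout=6m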
	I1014 13:40:08.703756    8300 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bsnhb" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:08.713250    8300 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 13:40:08.713277    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:08.737645    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:09.027476    8300 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 13:40:09.027504    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:09.028465    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:09.192685    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.235778    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:09.470625    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:09.471644    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:09.686336    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.724017    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:09.968520    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:09.969677    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:10.185754    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:10.284757    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:10.464657    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.468881    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:10.710741    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:10.770593    8300 pod_ready.go:103] pod "coredns-7c65d6cfc9-bsnhb" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:10.794079    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:10.965702    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.967342    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.185841    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:11.222030    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:11.467066    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:11.469206    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.685166    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:11.711397    8300 pod_ready.go:93] pod "coredns-7c65d6cfc9-bsnhb" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.711422    8300 pod_ready.go:82] duration metric: took 3.007626472s for pod "coredns-7c65d6cfc9-bsnhb" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.711456    8300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.716836    8300 pod_ready.go:93] pod "etcd-addons-002422" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.716863    8300 pod_ready.go:82] duration metric: took 5.39615ms for pod "etcd-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.716879    8300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.722185    8300 pod_ready.go:93] pod "kube-apiserver-addons-002422" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.722216    8300 pod_ready.go:82] duration metric: took 5.329212ms for pod "kube-apiserver-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.722228    8300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.722840    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:11.726976    8300 pod_ready.go:93] pod "kube-controller-manager-addons-002422" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.726999    8300 pod_ready.go:82] duration metric: took 4.763003ms for pod "kube-controller-manager-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.727014    8300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l8cm8" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.732189    8300 pod_ready.go:93] pod "kube-proxy-l8cm8" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.732216    8300 pod_ready.go:82] duration metric: took 5.194263ms for pod "kube-proxy-l8cm8" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.732230    8300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.965558    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.965832    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:12.107679    8300 pod_ready.go:93] pod "kube-scheduler-addons-002422" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:12.107702    8300 pod_ready.go:82] duration metric: took 375.464914ms for pod "kube-scheduler-addons-002422" in "kube-system" namespace to be "Ready" ...
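The pod_ready.go lines above poll one named pod at a time and report its PodReady condition; each "duration metric: took …" line is the measured wait for that pod. A minimal client-go sketch of that kind of readiness check follows (a sketch only, not minikube's pod_ready.go; the kubeconfig path, poll interval, and error handling are assumptions):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's PodReady condition is True --
	// the same condition the "Ready":"True"/"False" log lines refer to.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Load the default kubeconfig (~/.kube/config); minikube wires its
		// client up differently, so this path is an assumption.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		start := time.Now()
		deadline := start.Add(6 * time.Minute) // the 6m0s budget shown in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-scheduler-addons-002422", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Printf("duration metric: took %s\n", time.Since(start))
				return
			}
			time.Sleep(2 * time.Second) // poll interval is an assumption
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}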
	I1014 13:40:12.107715    8300 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:12.187450    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:12.230307    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:12.467870    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:12.469141    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:12.686181    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:12.726472    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:12.968632    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:12.971453    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.186734    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:13.288379    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:13.468833    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.469572    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:13.691085    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:13.722796    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:13.967320    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.968732    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.123790    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:14.188791    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.223008    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:14.473340    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:14.476242    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.699309    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.728731    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:14.971976    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.972724    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.185896    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:15.222496    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:15.467542    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.468799    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:15.686904    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:15.786475    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:15.966125    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.967040    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:16.185797    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:16.221978    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:16.466570    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:16.467388    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:16.614049    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:16.685524    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:16.722681    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:16.964684    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:16.967332    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:17.186668    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:17.223243    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:17.466297    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:17.466753    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:17.686530    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:17.722651    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:17.970506    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:17.972546    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:18.186497    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.222309    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:18.466524    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.467462    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:18.614247    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:18.686853    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.721947    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:18.965775    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.966738    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.185186    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:19.222274    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:19.464580    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:19.466106    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.685875    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:19.721911    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:19.964671    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:19.966618    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:20.189311    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:20.222657    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:20.472199    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:20.473727    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:20.618297    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:20.686013    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:20.725067    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:20.965365    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:20.967580    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:21.186187    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:21.222418    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:21.466257    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:21.468884    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:21.695334    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:21.722420    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:21.967200    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:21.968432    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:22.186507    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:22.223515    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:22.467776    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:22.468823    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:22.686524    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:22.722555    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:22.966992    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:22.968380    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:23.127726    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:23.186075    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.221928    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:23.472480    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:23.475893    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:23.685670    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.722116    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:23.964675    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:23.966307    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.185949    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:24.221763    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:24.466929    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:24.467840    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.686500    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:24.723357    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:24.964654    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:24.966885    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:25.186770    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:25.224264    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:25.468835    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:25.472989    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:25.615515    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:25.685771    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:25.785531    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:25.966812    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:25.968108    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:26.187678    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:26.221680    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:26.469421    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:26.471831    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:26.686271    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:26.722724    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:26.967053    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:26.970512    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:27.185477    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:27.222867    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:27.466250    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:27.468931    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:27.688567    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:27.722803    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:27.967279    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:27.968910    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.120174    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:28.185963    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:28.222451    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:28.467731    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:28.469892    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.694484    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:28.723733    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:28.968844    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.970243    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:29.188565    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:29.222632    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:29.465237    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:29.470081    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:29.686314    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:29.721820    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:29.967077    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:29.968097    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:30.123378    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:30.186144    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.222392    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:30.467612    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:30.468606    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:30.685939    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.722292    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:30.966292    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:30.966498    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:31.187119    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:31.228397    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:31.469474    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:31.470647    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:31.687793    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:31.722951    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:31.966353    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:31.968952    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:32.191012    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:32.222245    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:32.493783    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:32.497471    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:32.616729    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:32.685402    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:32.723199    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:33.015091    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:33.016225    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.186185    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:33.222121    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:33.464925    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:33.466305    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.685313    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:33.721937    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:33.965674    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.966000    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:34.188134    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:34.222521    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:34.465926    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:34.466210    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:34.685495    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:34.721879    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:34.967054    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:34.967585    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:35.123958    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:35.186850    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:35.222328    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:35.474600    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:35.476259    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:35.688267    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:35.722698    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:35.966461    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:35.968904    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:36.189405    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:36.227105    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:36.467374    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:36.469201    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:36.687641    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:36.723276    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:36.966516    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:36.966899    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.186027    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:37.221954    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:37.466693    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:37.467661    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.613599    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:37.685542    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:37.722282    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:37.964239    8300 kapi.go:107] duration metric: took 38.50341222s to wait for kubernetes.io/minikube-addons=registry ...
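The kapi.go:96 lines, by contrast, wait on a label selector rather than a pod name: each line is one poll of the pods matching, e.g., kubernetes.io/minikube-addons=registry, repeated roughly every 500ms until the matched pods are no longer Pending, and the kapi.go:107 line above records the total. A label-selector version of the same poll, sketched with client-go (same assumptions and imports as the sketch above; the Running-phase check and return shape are illustrative, not minikube's kapi.go):

	// waitForLabel polls the pods matching selector in ns until all of them
	// are Running, in the spirit of the kapi.go:96 loop above (sketch, not source).
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false // still Pending, as logged above
						break
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}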
	I1014 13:40:37.966629    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:38.185126    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:38.221942    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:38.466888    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:38.686438    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:38.722778    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:38.966517    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:39.187282    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:39.223207    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:39.466661    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:39.614336    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:39.691196    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:39.787651    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:39.966764    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:40.186084    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:40.222886    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:40.468879    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:40.687211    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:40.722327    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:40.968203    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:41.186447    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:41.222002    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:41.467195    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:41.614734    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:41.686099    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:41.722065    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:41.966707    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:42.185957    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:42.222263    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:42.466761    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:42.685295    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:42.722139    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:42.965961    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:43.185566    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.222396    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:43.466206    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:43.685170    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.722217    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:43.965841    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.119018    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:44.185599    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:44.222258    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:44.472427    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.685456    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:44.722820    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:44.967061    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:45.186268    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:45.222582    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:45.467133    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:45.686006    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:45.722745    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:45.966725    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:46.122727    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:46.185705    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:46.222478    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:46.470580    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:46.686007    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:46.723722    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:46.968777    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:47.189929    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.230753    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:47.468104    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:47.686508    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.735612    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:47.967384    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.186115    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:48.222574    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:48.467660    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.613831    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:48.686770    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:48.723180    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:48.967036    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:49.186235    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:49.222254    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:49.466998    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:49.688335    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:49.785244    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:49.965851    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:50.185449    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:50.222201    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:50.466967    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:50.615703    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:50.688793    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:50.723351    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:50.967653    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:51.187020    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:51.222617    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:51.466384    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:51.685756    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:51.722480    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:51.966614    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:52.185552    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:52.221973    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:52.467049    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:52.685690    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:52.721869    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:52.966897    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:53.117849    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:53.185524    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:53.222512    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:53.466694    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:53.685231    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:53.726031    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:53.966413    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:54.186863    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:54.222376    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:54.466654    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:54.686527    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:54.722195    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:54.967206    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:55.120495    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:55.186636    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:55.222945    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:55.466704    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:55.686587    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:55.723177    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:55.967943    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:56.185945    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:56.230424    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:56.467580    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:56.686756    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:56.729151    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:56.967597    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:57.138863    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:57.187768    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:57.222620    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:57.467322    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:57.686340    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:57.722924    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:57.973862    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:58.187396    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:58.223433    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:58.471576    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:58.696283    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:58.722845    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:58.969617    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:59.144349    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:59.187326    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:59.224062    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:59.468904    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:59.685865    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:59.723300    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:59.967201    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:00.187328    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:00.223296    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:00.469948    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:00.689322    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:00.787865    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:00.967220    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:01.193121    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:01.223356    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:01.466076    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:01.616801    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:01.685600    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:01.721282    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:01.968555    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:02.185627    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:02.221744    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:02.466461    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:02.685728    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:02.721776    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:02.966641    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:03.186738    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:03.222869    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:03.469224    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:03.685717    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:03.721948    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:03.967116    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:04.123040    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:04.186633    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:04.224324    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:04.466978    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:04.686272    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:04.722968    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:04.966310    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:05.185602    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:05.221985    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:05.466644    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:05.688078    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:05.722513    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:05.968444    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:06.193617    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:06.221743    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:06.466302    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:06.619047    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:06.687322    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:06.723683    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:06.966537    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:07.188570    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:07.222504    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:07.468497    8300 kapi.go:107] duration metric: took 1m8.006680509s to wait for app.kubernetes.io/name=ingress-nginx ...
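With registry (~38.5s) and ingress-nginx (~1m8s) done, only the csi-hostpath-driver and gcp-auth selectors are still being polled. Illustrative call sites for the waitForLabel sketch above, using those two remaining selectors (the selectors are taken from the log; the namespaces and the 6-minute budget are assumptions):

	// Hypothetical call sites; "cs" is the clientset from the first sketch.
	if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
	if err := waitForLabel(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
		fmt.Println(err)
	}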
	I1014 13:41:07.685477    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:07.721750    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:08.186738    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:08.222996    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:08.619965    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:08.686078    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:08.722502    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:09.186302    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:09.222663    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:09.685631    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:09.721891    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:10.186594    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:10.222211    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:10.686002    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:10.722457    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:11.130059    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:11.186102    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:11.222916    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:11.684891    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:11.722141    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:12.193144    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:12.222686    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:12.689014    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:12.788604    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:13.186121    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:13.222010    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:13.613504    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:13.687201    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:13.722294    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:14.186028    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:14.223174    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:14.693278    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:14.722092    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:15.190467    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:15.224301    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:15.614338    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:15.685666    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:15.722373    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:16.185958    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:16.222136    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:16.685847    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:16.722238    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:17.185998    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:17.221972    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:17.685169    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:17.721825    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:18.123627    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:18.187717    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:18.222369    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:18.685613    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:18.727245    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:19.122743    8300 pod_ready.go:93] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"True"
	I1014 13:41:19.122765    8300 pod_ready.go:82] duration metric: took 1m7.015042829s for pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace to be "Ready" ...
	I1014 13:41:19.122779    8300 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tnngr" in "kube-system" namespace to be "Ready" ...
	I1014 13:41:19.132407    8300 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-tnngr" in "kube-system" namespace has status "Ready":"True"
	I1014 13:41:19.132479    8300 pod_ready.go:82] duration metric: took 9.69201ms for pod "nvidia-device-plugin-daemonset-tnngr" in "kube-system" namespace to be "Ready" ...
	I1014 13:41:19.132516    8300 pod_ready.go:39] duration metric: took 1m10.489017042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:41:19.132561    8300 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:41:19.132608    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 13:41:19.132693    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 13:41:19.190302    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:19.211811    8300 cri.go:89] found id: "8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:19.211879    8300 cri.go:89] found id: ""
	I1014 13:41:19.211901    8300 logs.go:282] 1 containers: [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74]
	I1014 13:41:19.211982    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.220526    8300 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 13:41:19.220641    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 13:41:19.241773    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:19.276339    8300 cri.go:89] found id: "1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:19.276410    8300 cri.go:89] found id: ""
	I1014 13:41:19.276433    8300 logs.go:282] 1 containers: [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896]
	I1014 13:41:19.276519    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.280479    8300 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 13:41:19.280599    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 13:41:19.351441    8300 cri.go:89] found id: "ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:19.351484    8300 cri.go:89] found id: ""
	I1014 13:41:19.351493    8300 logs.go:282] 1 containers: [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f]
	I1014 13:41:19.351555    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.355570    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 13:41:19.355656    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 13:41:19.413270    8300 cri.go:89] found id: "62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:19.413304    8300 cri.go:89] found id: ""
	I1014 13:41:19.413313    8300 logs.go:282] 1 containers: [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8]
	I1014 13:41:19.413381    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.420849    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 13:41:19.420934    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 13:41:19.483334    8300 cri.go:89] found id: "09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:19.483358    8300 cri.go:89] found id: ""
	I1014 13:41:19.483382    8300 logs.go:282] 1 containers: [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255]
	I1014 13:41:19.483446    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.487618    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 13:41:19.487717    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 13:41:19.551083    8300 cri.go:89] found id: "3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:19.551142    8300 cri.go:89] found id: ""
	I1014 13:41:19.551158    8300 logs.go:282] 1 containers: [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8]
	I1014 13:41:19.551215    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.554787    8300 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 13:41:19.554860    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 13:41:19.598310    8300 cri.go:89] found id: "47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:19.598380    8300 cri.go:89] found id: ""
	I1014 13:41:19.598395    8300 logs.go:282] 1 containers: [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e]
	I1014 13:41:19.598462    8300 ssh_runner.go:195] Run: which crictl
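
The cri.go/logs.go sequence above resolves each control-plane component to a container ID by running `sudo crictl ps -a --quiet --name=<component>` and collecting one ID per output line. A minimal sketch of that lookup, assuming crictl is installed and passwordless sudo is available on the node:

// cri_list.go: sketch of the CRI container lookup shown in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers mirrors: sudo crictl ps -a --quiet --name=<name>
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // --quiet prints one container ID per line
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := findContainers(c)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
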
	I1014 13:41:19.601905    8300 logs.go:123] Gathering logs for container status ...
	I1014 13:41:19.601926    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 13:41:19.665148    8300 logs.go:123] Gathering logs for dmesg ...
	I1014 13:41:19.665224    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 13:41:19.682907    8300 logs.go:123] Gathering logs for coredns [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f] ...
	I1014 13:41:19.682933    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:19.687721    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:19.722161    8300 kapi.go:107] duration metric: took 1m16.503621459s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1014 13:41:19.725432    8300 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-002422 cluster.
	I1014 13:41:19.728014    8300 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1014 13:41:19.730593    8300 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1014 13:41:19.735037    8300 logs.go:123] Gathering logs for kube-proxy [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255] ...
	I1014 13:41:19.735065    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:19.820881    8300 logs.go:123] Gathering logs for kube-controller-manager [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8] ...
	I1014 13:41:19.820907    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:19.893884    8300 logs.go:123] Gathering logs for kindnet [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e] ...
	I1014 13:41:19.893917    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:19.942315    8300 logs.go:123] Gathering logs for CRI-O ...
	I1014 13:41:19.942345    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 13:41:20.038083    8300 logs.go:123] Gathering logs for kubelet ...
	I1014 13:41:20.038174    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 13:41:20.115418    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.630422    1493 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-002422' and this object
	W1014 13:41:20.115710    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:20.115919    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:20.116169    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:20.116400    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:20.116656    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
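
The "Found kubelet problem" warnings above come from scanning the last 400 kubelet journal lines for klog warning/error entries. A rough sketch of such a scan follows; the regular expression is an assumption and minikube's logs.go matching may differ:

// kubelet_scan.go: sketch of scanning journalctl output for kubelet problems.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		panic(err)
	}
	// klog warning/error lines carry "W<MMDD> HH:MM:SS" or "E<MMDD> HH:MM:SS".
	problem := regexp.MustCompile(`\s[WE]\d{4} \d{2}:\d{2}:\d{2}`)
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if problem.MatchString(sc.Text()) {
			fmt.Println("Found kubelet problem:", sc.Text())
		}
	}
}
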
	I1014 13:41:20.152836    8300 logs.go:123] Gathering logs for describe nodes ...
	I1014 13:41:20.152914    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 13:41:20.186646    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:20.356145    8300 logs.go:123] Gathering logs for kube-apiserver [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74] ...
	I1014 13:41:20.356173    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:20.412553    8300 logs.go:123] Gathering logs for etcd [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896] ...
	I1014 13:41:20.412587    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:20.475037    8300 logs.go:123] Gathering logs for kube-scheduler [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8] ...
	I1014 13:41:20.475086    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
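
Each "Gathering logs for …" step above shells out to `sudo /usr/bin/crictl logs --tail 400 <container-id>` with an ID found earlier. A sketch of that call, reusing the etcd container ID from this run:

// gather_logs.go: sketch of per-container log collection via crictl.
package main

import (
	"fmt"
	"os/exec"
)

// gather mirrors: sudo /usr/bin/crictl logs --tail 400 <id>
func gather(id string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo /usr/bin/crictl logs --tail 400 "+id).CombinedOutput()
	return string(out), err
}

func main() {
	// etcd container ID taken from this run; substitute your own.
	logs, err := gather("1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896")
	if err != nil {
		fmt.Println("gather failed:", err)
	}
	fmt.Println(logs)
}
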
	I1014 13:41:20.544679    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:20.544710    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 13:41:20.545077    8300 out.go:270] X Problems detected in kubelet:
	W1014 13:41:20.545098    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:20.545105    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:20.545119    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:20.545254    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:20.545261    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	I1014 13:41:20.545269    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:20.545282    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:41:20.686796    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:21.186158    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:21.686118    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:22.186247    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:22.685819    8300 kapi.go:107] duration metric: took 1m23.005216361s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1014 13:41:22.688880    8300 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, amd-gpu-device-plugin, storage-provisioner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1014 13:41:22.691714    8300 addons.go:510] duration metric: took 1m30.184000639s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner amd-gpu-device-plugin storage-provisioner default-storageclass storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1014 13:41:30.546877    8300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:41:30.560342    8300 api_server.go:72] duration metric: took 1m38.053028566s to wait for apiserver process to appear ...
	I1014 13:41:30.560367    8300 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:41:30.560402    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 13:41:30.560461    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 13:41:30.601242    8300 cri.go:89] found id: "8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:30.601265    8300 cri.go:89] found id: ""
	I1014 13:41:30.601273    8300 logs.go:282] 1 containers: [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74]
	I1014 13:41:30.601326    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.604628    8300 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 13:41:30.604697    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 13:41:30.644984    8300 cri.go:89] found id: "1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:30.645003    8300 cri.go:89] found id: ""
	I1014 13:41:30.645011    8300 logs.go:282] 1 containers: [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896]
	I1014 13:41:30.645062    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.648469    8300 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 13:41:30.648536    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 13:41:30.697128    8300 cri.go:89] found id: "ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:30.697146    8300 cri.go:89] found id: ""
	I1014 13:41:30.697153    8300 logs.go:282] 1 containers: [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f]
	I1014 13:41:30.697205    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.700974    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 13:41:30.701035    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 13:41:30.740346    8300 cri.go:89] found id: "62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:30.740369    8300 cri.go:89] found id: ""
	I1014 13:41:30.740376    8300 logs.go:282] 1 containers: [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8]
	I1014 13:41:30.740429    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.743903    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 13:41:30.743969    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 13:41:30.783592    8300 cri.go:89] found id: "09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:30.783616    8300 cri.go:89] found id: ""
	I1014 13:41:30.783624    8300 logs.go:282] 1 containers: [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255]
	I1014 13:41:30.783677    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.787072    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 13:41:30.787151    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 13:41:30.823473    8300 cri.go:89] found id: "3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:30.823549    8300 cri.go:89] found id: ""
	I1014 13:41:30.823572    8300 logs.go:282] 1 containers: [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8]
	I1014 13:41:30.823651    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.827113    8300 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 13:41:30.827178    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 13:41:30.865127    8300 cri.go:89] found id: "47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:30.865151    8300 cri.go:89] found id: ""
	I1014 13:41:30.865161    8300 logs.go:282] 1 containers: [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e]
	I1014 13:41:30.865215    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.869618    8300 logs.go:123] Gathering logs for dmesg ...
	I1014 13:41:30.869641    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 13:41:30.883538    8300 logs.go:123] Gathering logs for describe nodes ...
	I1014 13:41:30.883565    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 13:41:31.015963    8300 logs.go:123] Gathering logs for kube-scheduler [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8] ...
	I1014 13:41:31.015993    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:31.062622    8300 logs.go:123] Gathering logs for kindnet [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e] ...
	I1014 13:41:31.062651    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:31.105622    8300 logs.go:123] Gathering logs for container status ...
	I1014 13:41:31.105652    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 13:41:31.159051    8300 logs.go:123] Gathering logs for CRI-O ...
	I1014 13:41:31.159121    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 13:41:31.251884    8300 logs.go:123] Gathering logs for kubelet ...
	I1014 13:41:31.251917    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 13:41:31.324970    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.630422    1493 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-002422' and this object
	W1014 13:41:31.325225    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:31.325410    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:31.325632    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:31.325817    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:31.326041    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	I1014 13:41:31.362482    8300 logs.go:123] Gathering logs for kube-apiserver [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74] ...
	I1014 13:41:31.362509    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:31.418997    8300 logs.go:123] Gathering logs for etcd [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896] ...
	I1014 13:41:31.419027    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:31.467919    8300 logs.go:123] Gathering logs for coredns [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f] ...
	I1014 13:41:31.467949    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:31.507864    8300 logs.go:123] Gathering logs for kube-proxy [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255] ...
	I1014 13:41:31.507894    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:31.548235    8300 logs.go:123] Gathering logs for kube-controller-manager [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8] ...
	I1014 13:41:31.548260    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:31.618475    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:31.618509    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 13:41:31.618562    8300 out.go:270] X Problems detected in kubelet:
	W1014 13:41:31.618572    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:31.618580    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:31.618593    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:31.618601    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:31.618611    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	I1014 13:41:31.618619    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:31.618625    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:41:41.619260    8300 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1014 13:41:41.627573    8300 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1014 13:41:41.628499    8300 api_server.go:141] control plane version: v1.31.1
	I1014 13:41:41.628521    8300 api_server.go:131] duration metric: took 11.068146645s to wait for apiserver health ...
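
The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A sketch of the probe, with TLS verification disabled for brevity (an assumption; minikube authenticates with the cluster CA and client certificates from the kubeconfig):

// healthz_probe.go: sketch of the apiserver healthz check logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
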
	I1014 13:41:41.628529    8300 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:41:41.628550    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 13:41:41.628613    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 13:41:41.665965    8300 cri.go:89] found id: "8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:41.665995    8300 cri.go:89] found id: ""
	I1014 13:41:41.666002    8300 logs.go:282] 1 containers: [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74]
	I1014 13:41:41.666056    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.669487    8300 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 13:41:41.669557    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 13:41:41.708562    8300 cri.go:89] found id: "1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:41.708585    8300 cri.go:89] found id: ""
	I1014 13:41:41.708593    8300 logs.go:282] 1 containers: [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896]
	I1014 13:41:41.708646    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.712178    8300 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 13:41:41.712246    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 13:41:41.775326    8300 cri.go:89] found id: "ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:41.775347    8300 cri.go:89] found id: ""
	I1014 13:41:41.775355    8300 logs.go:282] 1 containers: [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f]
	I1014 13:41:41.775408    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.779511    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 13:41:41.779615    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 13:41:41.821335    8300 cri.go:89] found id: "62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:41.821356    8300 cri.go:89] found id: ""
	I1014 13:41:41.821363    8300 logs.go:282] 1 containers: [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8]
	I1014 13:41:41.821450    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.825710    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 13:41:41.825820    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 13:41:41.865087    8300 cri.go:89] found id: "09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:41.865108    8300 cri.go:89] found id: ""
	I1014 13:41:41.865116    8300 logs.go:282] 1 containers: [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255]
	I1014 13:41:41.865169    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.868563    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 13:41:41.868634    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 13:41:41.907304    8300 cri.go:89] found id: "3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:41.907327    8300 cri.go:89] found id: ""
	I1014 13:41:41.907335    8300 logs.go:282] 1 containers: [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8]
	I1014 13:41:41.907391    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.910857    8300 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 13:41:41.910930    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 13:41:41.949718    8300 cri.go:89] found id: "47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:41.949744    8300 cri.go:89] found id: ""
	I1014 13:41:41.949752    8300 logs.go:282] 1 containers: [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e]
	I1014 13:41:41.949805    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.953310    8300 logs.go:123] Gathering logs for kindnet [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e] ...
	I1014 13:41:41.953338    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:41.996585    8300 logs.go:123] Gathering logs for container status ...
	I1014 13:41:41.996615    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 13:41:42.050322    8300 logs.go:123] Gathering logs for kubelet ...
	I1014 13:41:42.050352    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 13:41:42.135143    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.630422    1493 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-002422' and this object
	W1014 13:41:42.135373    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:42.135558    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:42.135780    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:42.135963    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:42.136185    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	I1014 13:41:42.175445    8300 logs.go:123] Gathering logs for etcd [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896] ...
	I1014 13:41:42.175490    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:42.232021    8300 logs.go:123] Gathering logs for kube-scheduler [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8] ...
	I1014 13:41:42.232058    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:42.276952    8300 logs.go:123] Gathering logs for kube-proxy [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255] ...
	I1014 13:41:42.276988    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:42.319634    8300 logs.go:123] Gathering logs for kube-controller-manager [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8] ...
	I1014 13:41:42.319660    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:42.396472    8300 logs.go:123] Gathering logs for CRI-O ...
	I1014 13:41:42.396507    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 13:41:42.493405    8300 logs.go:123] Gathering logs for dmesg ...
	I1014 13:41:42.493438    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 13:41:42.505382    8300 logs.go:123] Gathering logs for describe nodes ...
	I1014 13:41:42.505410    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 13:41:42.639254    8300 logs.go:123] Gathering logs for kube-apiserver [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74] ...
	I1014 13:41:42.639286    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:42.707467    8300 logs.go:123] Gathering logs for coredns [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f] ...
	I1014 13:41:42.707498    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:42.750023    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:42.750051    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 13:41:42.750110    8300 out.go:270] X Problems detected in kubelet:
	W1014 13:41:42.750126    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:42.750135    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:42.750146    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:42.750153    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:42.750205    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	I1014 13:41:42.750211    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:42.750218    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:41:52.761041    8300 system_pods.go:59] 18 kube-system pods found
	I1014 13:41:52.761086    8300 system_pods.go:61] "coredns-7c65d6cfc9-bsnhb" [1719c402-d9cd-43d4-af23-a0333df02866] Running
	I1014 13:41:52.761095    8300 system_pods.go:61] "csi-hostpath-attacher-0" [1e5df543-7e1e-48cb-9857-ad4fa55eecc3] Running
	I1014 13:41:52.761101    8300 system_pods.go:61] "csi-hostpath-resizer-0" [3aacd79a-b371-4b56-bf98-d444c83b9439] Running
	I1014 13:41:52.761128    8300 system_pods.go:61] "csi-hostpathplugin-jrvhl" [cd5f386d-cfc5-4dc6-9ec6-5643a4184f8c] Running
	I1014 13:41:52.761139    8300 system_pods.go:61] "etcd-addons-002422" [055ec4e6-1017-4a4e-be4f-7a71bf7807a4] Running
	I1014 13:41:52.761144    8300 system_pods.go:61] "kindnet-xjsm2" [e0634e3a-e89d-46c3-befa-fa9f56e48570] Running
	I1014 13:41:52.761149    8300 system_pods.go:61] "kube-apiserver-addons-002422" [125f5bf2-9f9b-4b6f-b862-494aa9801820] Running
	I1014 13:41:52.761153    8300 system_pods.go:61] "kube-controller-manager-addons-002422" [a31d6a59-7270-4061-92a4-5065ef2d5330] Running
	I1014 13:41:52.761165    8300 system_pods.go:61] "kube-ingress-dns-minikube" [85b77aed-3ee1-4f75-97b3-879fb269f534] Running
	I1014 13:41:52.761169    8300 system_pods.go:61] "kube-proxy-l8cm8" [c57ee3d5-8ab2-46bd-b68b-80f6c3904d40] Running
	I1014 13:41:52.761174    8300 system_pods.go:61] "kube-scheduler-addons-002422" [1dc281ca-83cd-4762-9821-4e17445ccfea] Running
	I1014 13:41:52.761180    8300 system_pods.go:61] "metrics-server-84c5f94fbc-p68nc" [344d0c1c-bbea-4de6-a079-724c18606d38] Running
	I1014 13:41:52.761185    8300 system_pods.go:61] "nvidia-device-plugin-daemonset-tnngr" [a113dbce-1d95-437b-83fc-dd34499d10e4] Running
	I1014 13:41:52.761210    8300 system_pods.go:61] "registry-66c9cd494c-ddkrt" [091b0f03-dc90-4b2b-bbd3-c73a13edd832] Running
	I1014 13:41:52.761220    8300 system_pods.go:61] "registry-proxy-wjht4" [7f1138a2-5ec8-4c04-a3b7-fdb6c0af33aa] Running
	I1014 13:41:52.761224    8300 system_pods.go:61] "snapshot-controller-56fcc65765-d9p5h" [272bc704-122e-4ffe-a624-e7051cb8832f] Running
	I1014 13:41:52.761229    8300 system_pods.go:61] "snapshot-controller-56fcc65765-pq9xk" [c3e18049-be5f-43ff-a507-33cabb741de9] Running
	I1014 13:41:52.761236    8300 system_pods.go:61] "storage-provisioner" [832679c2-ca50-4565-b1cd-90c63d11988b] Running
	I1014 13:41:52.761243    8300 system_pods.go:74] duration metric: took 11.132707132s to wait for pod list to return data ...
	I1014 13:41:52.761252    8300 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:41:52.763788    8300 default_sa.go:45] found service account: "default"
	I1014 13:41:52.763813    8300 default_sa.go:55] duration metric: took 2.550674ms for default service account to be created ...
	I1014 13:41:52.763822    8300 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:41:52.773891    8300 system_pods.go:86] 18 kube-system pods found
	I1014 13:41:52.773928    8300 system_pods.go:89] "coredns-7c65d6cfc9-bsnhb" [1719c402-d9cd-43d4-af23-a0333df02866] Running
	I1014 13:41:52.773936    8300 system_pods.go:89] "csi-hostpath-attacher-0" [1e5df543-7e1e-48cb-9857-ad4fa55eecc3] Running
	I1014 13:41:52.773941    8300 system_pods.go:89] "csi-hostpath-resizer-0" [3aacd79a-b371-4b56-bf98-d444c83b9439] Running
	I1014 13:41:52.773969    8300 system_pods.go:89] "csi-hostpathplugin-jrvhl" [cd5f386d-cfc5-4dc6-9ec6-5643a4184f8c] Running
	I1014 13:41:52.773981    8300 system_pods.go:89] "etcd-addons-002422" [055ec4e6-1017-4a4e-be4f-7a71bf7807a4] Running
	I1014 13:41:52.773987    8300 system_pods.go:89] "kindnet-xjsm2" [e0634e3a-e89d-46c3-befa-fa9f56e48570] Running
	I1014 13:41:52.773993    8300 system_pods.go:89] "kube-apiserver-addons-002422" [125f5bf2-9f9b-4b6f-b862-494aa9801820] Running
	I1014 13:41:52.773997    8300 system_pods.go:89] "kube-controller-manager-addons-002422" [a31d6a59-7270-4061-92a4-5065ef2d5330] Running
	I1014 13:41:52.774002    8300 system_pods.go:89] "kube-ingress-dns-minikube" [85b77aed-3ee1-4f75-97b3-879fb269f534] Running
	I1014 13:41:52.774006    8300 system_pods.go:89] "kube-proxy-l8cm8" [c57ee3d5-8ab2-46bd-b68b-80f6c3904d40] Running
	I1014 13:41:52.774012    8300 system_pods.go:89] "kube-scheduler-addons-002422" [1dc281ca-83cd-4762-9821-4e17445ccfea] Running
	I1014 13:41:52.774017    8300 system_pods.go:89] "metrics-server-84c5f94fbc-p68nc" [344d0c1c-bbea-4de6-a079-724c18606d38] Running
	I1014 13:41:52.774021    8300 system_pods.go:89] "nvidia-device-plugin-daemonset-tnngr" [a113dbce-1d95-437b-83fc-dd34499d10e4] Running
	I1014 13:41:52.774024    8300 system_pods.go:89] "registry-66c9cd494c-ddkrt" [091b0f03-dc90-4b2b-bbd3-c73a13edd832] Running
	I1014 13:41:52.774028    8300 system_pods.go:89] "registry-proxy-wjht4" [7f1138a2-5ec8-4c04-a3b7-fdb6c0af33aa] Running
	I1014 13:41:52.774054    8300 system_pods.go:89] "snapshot-controller-56fcc65765-d9p5h" [272bc704-122e-4ffe-a624-e7051cb8832f] Running
	I1014 13:41:52.774059    8300 system_pods.go:89] "snapshot-controller-56fcc65765-pq9xk" [c3e18049-be5f-43ff-a507-33cabb741de9] Running
	I1014 13:41:52.774063    8300 system_pods.go:89] "storage-provisioner" [832679c2-ca50-4565-b1cd-90c63d11988b] Running
	I1014 13:41:52.774071    8300 system_pods.go:126] duration metric: took 10.242384ms to wait for k8s-apps to be running ...
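
The kube-system sweep above lists the namespace and checks that every pod reports a healthy phase. A client-go sketch of that check (kubeconfig path assumed; the system_pods.go references in the log are minikube's own source, not this sketch):

// system_pods.go sketch: list kube-system pods and report each pod's phase.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		ok := p.Status.Phase == corev1.PodRunning || p.Status.Phase == corev1.PodSucceeded
		fmt.Printf("%q [%s] %s (healthy=%v)\n", p.Name, p.UID, p.Status.Phase, ok)
	}
}
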
	I1014 13:41:52.774078    8300 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:41:52.774154    8300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:41:52.786726    8300 system_svc.go:56] duration metric: took 12.638293ms WaitForService to wait for kubelet
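
The kubelet service probe above runs `sudo systemctl is-active --quiet service kubelet` and treats a zero exit status as "running". The same check from Go, mirroring the logged command:

// svc_check.go: sketch of the kubelet service activity check.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// With --quiet, systemctl prints nothing; the exit code carries the answer.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil) // non-zero exit -> err != nil
}
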
	I1014 13:41:52.786757    8300 kubeadm.go:582] duration metric: took 2m0.279448218s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:41:52.786776    8300 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:41:52.790212    8300 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 13:41:52.790247    8300 node_conditions.go:123] node cpu capacity is 2
	I1014 13:41:52.790259    8300 node_conditions.go:105] duration metric: took 3.477745ms to run NodePressure ...
	I1014 13:41:52.790270    8300 start.go:241] waiting for startup goroutines ...
	I1014 13:41:52.790278    8300 start.go:246] waiting for cluster config update ...
	I1014 13:41:52.790293    8300 start.go:255] writing updated cluster config ...
	I1014 13:41:52.790588    8300 ssh_runner.go:195] Run: rm -f paused
	I1014 13:41:53.192225    8300 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 13:41:53.193794    8300 out.go:177] * Done! kubectl is now configured to use "addons-002422" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 14 13:43:47 addons-002422 crio[969]: time="2024-10-14 13:43:47.541718070Z" level=info msg="Removed pod sandbox: d383f386ecc93958e1322521c4dbeef31daa26ac017a25b2bb5d1ef706166ac7" id=0f575695-35bf-4a06-a9f4-7f65a887462a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.193915572Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-pfhmd/POD" id=abaabee4-b904-4b27-b1b9-111ed933bd80 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.193978218Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.228778918Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-pfhmd Namespace:default ID:fa1f79a1ddb533a93bd1615336d47515834052a4a2f5510ad35fcec5702e0eee UID:f5f83fdb-25be-40d5-9d3f-0e790983e8df NetNS:/var/run/netns/cac0241f-1b6b-48d7-a019-c38afe37bcf4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.228818237Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-pfhmd to CNI network \"kindnet\" (type=ptp)"
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.249390872Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-pfhmd Namespace:default ID:fa1f79a1ddb533a93bd1615336d47515834052a4a2f5510ad35fcec5702e0eee UID:f5f83fdb-25be-40d5-9d3f-0e790983e8df NetNS:/var/run/netns/cac0241f-1b6b-48d7-a019-c38afe37bcf4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.249540131Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-pfhmd for CNI network kindnet (type=ptp)"
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.252019486Z" level=info msg="Ran pod sandbox fa1f79a1ddb533a93bd1615336d47515834052a4a2f5510ad35fcec5702e0eee with infra container: default/hello-world-app-55bf9c44b4-pfhmd/POD" id=abaabee4-b904-4b27-b1b9-111ed933bd80 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.253322557Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5cc71eaa-6e49-4e1c-a8ef-bd4fefb48850 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.253534216Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=5cc71eaa-6e49-4e1c-a8ef-bd4fefb48850 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.255982982Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=b57293eb-f1dc-494b-ba06-283415c28bec name=/runtime.v1.ImageService/PullImage
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.262984477Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 14 13:45:57 addons-002422 crio[969]: time="2024-10-14 13:45:57.595497646Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.399940790Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=b57293eb-f1dc-494b-ba06-283415c28bec name=/runtime.v1.ImageService/PullImage
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.400547722Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e7dfb89f-6ea9-4128-b341-3af5f145fe33 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.401354391Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e7dfb89f-6ea9-4128-b341-3af5f145fe33 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.402388908Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e129ca63-0d6f-424d-b114-f03c593d7bff name=/runtime.v1.ImageService/ImageStatus
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.403001099Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e129ca63-0d6f-424d-b114-f03c593d7bff name=/runtime.v1.ImageService/ImageStatus
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.403990923Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-pfhmd/hello-world-app" id=6109060d-3776-49af-9938-89ab1834929f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.404081713Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.428848046Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4fae11728443a26f00bc8a2f06c7b01a56fcbf4511fa08206c8044408341461f/merged/etc/passwd: no such file or directory"
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.429019541Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4fae11728443a26f00bc8a2f06c7b01a56fcbf4511fa08206c8044408341461f/merged/etc/group: no such file or directory"
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.484416310Z" level=info msg="Created container 7581e29d62f8c054b9ddfde6c9962368ba0dc67a16ba8b602e15d90fdcda758f: default/hello-world-app-55bf9c44b4-pfhmd/hello-world-app" id=6109060d-3776-49af-9938-89ab1834929f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.485404698Z" level=info msg="Starting container: 7581e29d62f8c054b9ddfde6c9962368ba0dc67a16ba8b602e15d90fdcda758f" id=8ca58d98-f760-4f4d-8c58-6e4667b27a2f name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 13:45:58 addons-002422 crio[969]: time="2024-10-14 13:45:58.505374298Z" level=info msg="Started container" PID=8367 containerID=7581e29d62f8c054b9ddfde6c9962368ba0dc67a16ba8b602e15d90fdcda758f description=default/hello-world-app-55bf9c44b4-pfhmd/hello-world-app id=8ca58d98-f760-4f4d-8c58-6e4667b27a2f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa1f79a1ddb533a93bd1615336d47515834052a4a2f5510ad35fcec5702e0eee
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                       ATTEMPT             POD ID              POD
	7581e29d62f8c       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app            0                   fa1f79a1ddb53       hello-world-app-55bf9c44b4-pfhmd
	a7421d31433bb       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                      0                   35c7a64d1ead9       nginx
	0a3873b6a1313       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                    0                   0b9b34a5ff6d3       busybox
	0116313dbe028       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             4 minutes ago            Running             controller                 0                   36d43cb197cae       ingress-nginx-controller-5f85ff4588-2wmx4
	cdeac225d13a4       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              5 minutes ago            Running             yakd                       0                   089c79855b00d       yakd-dashboard-67d98fc6b-qgctz
	a127dc0621a8c       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     5 minutes ago            Running             nvidia-device-plugin-ctr   0                   ea8114682b7cb       nvidia-device-plugin-daemonset-tnngr
	393d92e1891bd       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago            Running             local-path-provisioner     0                   3c6acef92b0a2       local-path-provisioner-86d989889c-8sdx4
	d7554a361bf43       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              patch                      0                   d9afc9d8f05ea       ingress-nginx-admission-patch-m2gmb
	4bba5dbe92734       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              create                     0                   11524833ee68f       ingress-nginx-admission-create-wp9ww
	bd13935951c31       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             5 minutes ago            Running             minikube-ingress-dns       0                   f02059b197e72       kube-ingress-dns-minikube
	fa367e6127e27       gcr.io/cloud-spanner-emulator/emulator@sha256:6ce1265c73355797b34d2531c7146eed3996346f860517e35d1434182eb5f01d               5 minutes ago            Running             cloud-spanner-emulator     0                   f8152ffb6c4a3       cloud-spanner-emulator-5b584cc74-fwt5t
	57a5d29f5a270       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        5 minutes ago            Running             metrics-server             0                   f5e4a601392aa       metrics-server-84c5f94fbc-p68nc
	ada184f93dd5b       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             5 minutes ago            Running             coredns                    0                   daba31545a435       coredns-7c65d6cfc9-bsnhb
	749f7ebdaeaf5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner        0                   1c28befd43fbe       storage-provisioner
	47e55f64e180f       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387                           6 minutes ago            Running             kindnet-cni                0                   d3f853ecbc8ad       kindnet-xjsm2
	09ddfab546738       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             6 minutes ago            Running             kube-proxy                 0                   8d6d9e6d67223       kube-proxy-l8cm8
	1028165ec0621       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             6 minutes ago            Running             etcd                       0                   04b6b690c81f9       etcd-addons-002422
	62098d1172497       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             6 minutes ago            Running             kube-scheduler             0                   1d827eff7713c       kube-scheduler-addons-002422
	3e4cf70c88184       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             6 minutes ago            Running             kube-controller-manager    0                   76c74a21d4af4       kube-controller-manager-addons-002422
	8b5eecbb1fe82       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             6 minutes ago            Running             kube-apiserver             0                   93ae1f0de0f96       kube-apiserver-addons-002422
	
	
	==> coredns [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f] <==
	[INFO] 10.244.0.8:44372 - 27010 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002586112s
	[INFO] 10.244.0.8:44372 - 19887 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000242518s
	[INFO] 10.244.0.8:44372 - 3863 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000153181s
	[INFO] 10.244.0.8:39157 - 45011 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000118949s
	[INFO] 10.244.0.8:39157 - 45207 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000082117s
	[INFO] 10.244.0.8:34360 - 3935 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062038s
	[INFO] 10.244.0.8:34360 - 4124 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00015497s
	[INFO] 10.244.0.8:60782 - 1691 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054736s
	[INFO] 10.244.0.8:60782 - 1519 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059619s
	[INFO] 10.244.0.8:37523 - 7894 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001638307s
	[INFO] 10.244.0.8:37523 - 7433 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00171468s
	[INFO] 10.244.0.8:45406 - 63109 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000089871s
	[INFO] 10.244.0.8:45406 - 63265 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000062466s
	[INFO] 10.244.0.21:57286 - 54240 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155766s
	[INFO] 10.244.0.21:34847 - 43639 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000185936s
	[INFO] 10.244.0.21:41613 - 34227 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.002258096s
	[INFO] 10.244.0.21:48605 - 55375 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000334095s
	[INFO] 10.244.0.21:57425 - 41543 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000180587s
	[INFO] 10.244.0.21:35525 - 24386 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100668s
	[INFO] 10.244.0.21:48141 - 20043 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.0031399s
	[INFO] 10.244.0.21:47055 - 2655 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003316458s
	[INFO] 10.244.0.21:37278 - 54331 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000875584s
	[INFO] 10.244.0.21:36351 - 2614 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00160556s
	[INFO] 10.244.0.24:58786 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000197833s
	[INFO] 10.244.0.24:33286 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127195s
	
	
	==> describe nodes <==
	Name:               addons-002422
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-002422
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=addons-002422
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T13_39_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-002422
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:39:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-002422
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 13:45:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:43:52 +0000   Mon, 14 Oct 2024 13:39:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:43:52 +0000   Mon, 14 Oct 2024 13:39:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:43:52 +0000   Mon, 14 Oct 2024 13:39:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:43:52 +0000   Mon, 14 Oct 2024 13:40:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-002422
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 216d99f7dc424e599d6a70e41b29e088
	  System UUID:                51be1b84-8333-4024-a862-c04d66a5271b
	  Boot ID:                    c1fb5e99-d9c3-4e62-b114-4b2c9a33f58a
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  default                     cloud-spanner-emulator-5b584cc74-fwt5t       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  default                     hello-world-app-55bf9c44b4-pfhmd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-2wmx4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m
	  kube-system                 coredns-7c65d6cfc9-bsnhb                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m7s
	  kube-system                 etcd-addons-002422                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m12s
	  kube-system                 kindnet-xjsm2                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m8s
	  kube-system                 kube-apiserver-addons-002422                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-addons-002422        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-l8cm8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-scheduler-addons-002422                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 metrics-server-84c5f94fbc-p68nc              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m2s
	  kube-system                 nvidia-device-plugin-daemonset-tnngr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  local-path-storage          local-path-provisioner-86d989889c-8sdx4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-qgctz               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m6s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m19s (x8 over 6m19s)  kubelet          Node addons-002422 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m19s (x8 over 6m19s)  kubelet          Node addons-002422 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m19s (x7 over 6m19s)  kubelet          Node addons-002422 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m12s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m12s (x2 over 6m12s)  kubelet          Node addons-002422 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m12s (x2 over 6m12s)  kubelet          Node addons-002422 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m12s (x2 over 6m12s)  kubelet          Node addons-002422 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m8s                   node-controller  Node addons-002422 event: Registered Node addons-002422 in Controller
	  Normal   NodeReady                5m51s                  kubelet          Node addons-002422 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct14 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014835] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.475618] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.053479] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015843] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.695923] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.686422] kauditd_printk_skb: 34 callbacks suppressed
	
	
	==> etcd [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896] <==
	{"level":"info","ts":"2024-10-14T13:39:41.673517Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T13:39:41.674471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T13:39:41.677088Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:39:41.677213Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:39:41.677265Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:39:41.746155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-14T13:39:55.704344Z","caller":"traceutil/trace.go:171","msg":"trace[1703535512] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"125.061507ms","start":"2024-10-14T13:39:55.579265Z","end":"2024-10-14T13:39:55.704327Z","steps":["trace[1703535512] 'process raft request'  (duration: 100.74032ms)","trace[1703535512] 'compare'  (duration: 24.075148ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T13:39:55.709244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.350627ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:39:55.742502Z","caller":"traceutil/trace.go:171","msg":"trace[1742711474] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:398; }","duration":"137.617824ms","start":"2024-10-14T13:39:55.604866Z","end":"2024-10-14T13:39:55.742484Z","steps":["trace[1742711474] 'agreement among raft nodes before linearized reading'  (duration: 104.309799ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:39:55.709463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.395559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-10-14T13:39:55.743038Z","caller":"traceutil/trace.go:171","msg":"trace[1917448555] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:400; }","duration":"137.96655ms","start":"2024-10-14T13:39:55.605059Z","end":"2024-10-14T13:39:55.743026Z","steps":["trace[1917448555] 'agreement among raft nodes before linearized reading'  (duration: 104.369401ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:39:56.559335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.136832ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032554294518971 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:399 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3174 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-14T13:39:56.567126Z","caller":"traceutil/trace.go:171","msg":"trace[696180709] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"201.90427ms","start":"2024-10-14T13:39:56.365204Z","end":"2024-10-14T13:39:56.567108Z","steps":["trace[696180709] 'process raft request'  (duration: 89.922009ms)","trace[696180709] 'compare'  (duration: 101.057432ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T13:39:56.567433Z","caller":"traceutil/trace.go:171","msg":"trace[208790586] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"202.128434ms","start":"2024-10-14T13:39:56.365293Z","end":"2024-10-14T13:39:56.567421Z","steps":["trace[208790586] 'process raft request'  (duration: 194.656711ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:39:56.567731Z","caller":"traceutil/trace.go:171","msg":"trace[2018964444] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"202.214382ms","start":"2024-10-14T13:39:56.365509Z","end":"2024-10-14T13:39:56.567723Z","steps":["trace[2018964444] 'process raft request'  (duration: 194.531967ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:39:56.567891Z","caller":"traceutil/trace.go:171","msg":"trace[1274013744] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"195.842423ms","start":"2024-10-14T13:39:56.372041Z","end":"2024-10-14T13:39:56.567884Z","steps":["trace[1274013744] 'process raft request'  (duration: 188.046179ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:39:56.567917Z","caller":"traceutil/trace.go:171","msg":"trace[1855707437] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"195.803604ms","start":"2024-10-14T13:39:56.372108Z","end":"2024-10-14T13:39:56.567912Z","steps":["trace[1855707437] 'process raft request'  (duration: 188.006999ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:39:56.568946Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.89416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:39:56.601194Z","caller":"traceutil/trace.go:171","msg":"trace[345260323] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:413; }","duration":"195.413853ms","start":"2024-10-14T13:39:56.405766Z","end":"2024-10-14T13:39:56.601180Z","steps":["trace[345260323] 'agreement among raft nodes before linearized reading'  (duration: 161.872631ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:39:56.567358Z","caller":"traceutil/trace.go:171","msg":"trace[1219537254] linearizableReadLoop","detail":"{readStateIndex:426; appliedIndex:420; }","duration":"161.573333ms","start":"2024-10-14T13:39:56.405771Z","end":"2024-10-14T13:39:56.567344Z","steps":["trace[1219537254] 'read index received'  (duration: 6.757042ms)","trace[1219537254] 'applied index is now lower than readState.Index'  (duration: 154.814182ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T13:39:56.569116Z","caller":"traceutil/trace.go:171","msg":"trace[1142324761] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"143.450437ms","start":"2024-10-14T13:39:56.425656Z","end":"2024-10-14T13:39:56.569106Z","steps":["trace[1142324761] 'process raft request'  (duration: 142.359271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:39:56.614133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.027147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-002422\" ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2024-10-14T13:39:56.614713Z","caller":"traceutil/trace.go:171","msg":"trace[111109841] range","detail":"{range_begin:/registry/minions/addons-002422; range_end:; response_count:1; response_revision:418; }","duration":"159.613731ms","start":"2024-10-14T13:39:56.455086Z","end":"2024-10-14T13:39:56.614700Z","steps":["trace[111109841] 'agreement among raft nodes before linearized reading'  (duration: 159.001309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:39:56.614968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.299264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:39:56.616701Z","caller":"traceutil/trace.go:171","msg":"trace[633126474] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:0; response_revision:418; }","duration":"191.08861ms","start":"2024-10-14T13:39:56.425600Z","end":"2024-10-14T13:39:56.616689Z","steps":["trace[633126474] 'agreement among raft nodes before linearized reading'  (duration: 189.273172ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:45:59 up 28 min,  0 users,  load average: 0.03, 0.57, 0.41
	Linux addons-002422 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e] <==
	I1014 13:43:58.444252       1 main.go:300] handling current node
	I1014 13:44:08.449096       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:08.449129       1 main.go:300] handling current node
	I1014 13:44:18.449666       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:18.449699       1 main.go:300] handling current node
	I1014 13:44:28.449117       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:28.449151       1 main.go:300] handling current node
	I1014 13:44:38.449068       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:38.449103       1 main.go:300] handling current node
	I1014 13:44:48.449985       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:48.450019       1 main.go:300] handling current node
	I1014 13:44:58.444557       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:58.444586       1 main.go:300] handling current node
	I1014 13:45:08.443975       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:45:08.444087       1 main.go:300] handling current node
	I1014 13:45:18.445388       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:45:18.445423       1 main.go:300] handling current node
	I1014 13:45:28.444692       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:45:28.444760       1 main.go:300] handling current node
	I1014 13:45:38.448883       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:45:38.448922       1 main.go:300] handling current node
	I1014 13:45:48.450748       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:45:48.450781       1 main.go:300] handling current node
	I1014 13:45:58.444037       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:45:58.444074       1 main.go:300] handling current node
	
	
	==> kube-apiserver [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74] <==
	E1014 13:41:19.024702       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 13:41:19.102837       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1014 13:42:05.216035       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44098: use of closed network connection
	E1014 13:42:05.455514       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44118: use of closed network connection
	I1014 13:42:14.866585       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.229.9"}
	I1014 13:43:03.063091       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1014 13:43:17.774362       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:43:17.774425       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:43:17.844707       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:43:17.844880       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:43:17.905822       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:43:17.905940       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:43:17.942397       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:43:17.942432       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1014 13:43:18.908004       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1014 13:43:18.942558       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1014 13:43:19.035704       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1014 13:43:31.519357       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1014 13:43:32.552131       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1014 13:43:37.064523       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1014 13:43:37.356768       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.40.124"}
	I1014 13:45:57.249198       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.166.184"}
	
	
	==> kube-controller-manager [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8] <==
	W1014 13:44:03.814541       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:44:03.814582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:44:22.619399       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:44:22.619441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:44:35.933961       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:44:35.934003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:44:37.853292       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:44:37.853334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:44:48.863090       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:44:48.863130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:45:19.551214       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:45:19.551254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:45:22.399240       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:45:22.399279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:45:29.938406       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:45:29.938535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:45:33.936809       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:45:33.936849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:45:51.613868       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:45:51.613913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1014 13:45:56.886501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.430619ms"
	I1014 13:45:56.912545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="25.922441ms"
	I1014 13:45:56.912699       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.48µs"
	I1014 13:45:59.206747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.463839ms"
	I1014 13:45:59.206831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.518µs"
	
	
	==> kube-proxy [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255] <==
	I1014 13:39:52.294288       1 server_linux.go:66] "Using iptables proxy"
	I1014 13:39:52.394712       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1014 13:39:52.394871       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:39:52.421809       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 13:39:52.421919       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:39:52.425428       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:39:52.439398       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:39:52.439423       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:39:52.440582       1 config.go:199] "Starting service config controller"
	I1014 13:39:52.440648       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:39:52.444864       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:39:52.444953       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:39:52.445458       1 config.go:328] "Starting node config controller"
	I1014 13:39:52.445546       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:39:52.548284       1 shared_informer.go:320] Caches are synced for node config
	I1014 13:39:52.548384       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:39:52.548437       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8] <==
	W1014 13:39:45.325072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.325155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.325293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 13:39:45.325348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.325453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 13:39:45.325501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.325587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.325633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.326335       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 13:39:45.326407       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 13:39:45.326552       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 13:39:45.326605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.326778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 13:39:45.326824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.328929       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 13:39:45.328970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.329020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 13:39:45.329066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.329081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.329201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.329139       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.329321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.329036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 13:39:45.329418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 13:39:46.516822       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 13:44:37 addons-002422 kubelet[1493]: E1014 13:44:37.386027    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913477385787450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:44:43 addons-002422 kubelet[1493]: I1014 13:44:43.227033    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5b584cc74-fwt5t" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:44:47 addons-002422 kubelet[1493]: I1014 13:44:47.227064    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-tnngr" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:44:47 addons-002422 kubelet[1493]: E1014 13:44:47.306468    1493 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c, memory: /docker/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c/system.slice/kubelet.service"
	Oct 14 13:44:47 addons-002422 kubelet[1493]: E1014 13:44:47.388248    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913487388003739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:44:47 addons-002422 kubelet[1493]: E1014 13:44:47.388443    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913487388003739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:44:57 addons-002422 kubelet[1493]: E1014 13:44:57.390894    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913497390702142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:44:57 addons-002422 kubelet[1493]: E1014 13:44:57.390930    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913497390702142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:07 addons-002422 kubelet[1493]: E1014 13:45:07.393615    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913507393401233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:07 addons-002422 kubelet[1493]: E1014 13:45:07.393657    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913507393401233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:17 addons-002422 kubelet[1493]: E1014 13:45:17.396082    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913517395868129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:17 addons-002422 kubelet[1493]: E1014 13:45:17.396120    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913517395868129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:27 addons-002422 kubelet[1493]: E1014 13:45:27.399000    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913527398766059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:27 addons-002422 kubelet[1493]: E1014 13:45:27.399037    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913527398766059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:37 addons-002422 kubelet[1493]: E1014 13:45:37.402094    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913537401849429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:37 addons-002422 kubelet[1493]: E1014 13:45:37.402136    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913537401849429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:41 addons-002422 kubelet[1493]: I1014 13:45:41.227296    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:45:46 addons-002422 kubelet[1493]: I1014 13:45:46.226928    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5b584cc74-fwt5t" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:45:47 addons-002422 kubelet[1493]: E1014 13:45:47.405884    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913547404719659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:47 addons-002422 kubelet[1493]: E1014 13:45:47.405936    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913547404719659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:56 addons-002422 kubelet[1493]: I1014 13:45:56.891800    1493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=139.122120863 podStartE2EDuration="2m19.891784176s" podCreationTimestamp="2024-10-14 13:43:37 +0000 UTC" firstStartedPulling="2024-10-14 13:43:37.622463466 +0000 UTC m=+230.495741694" lastFinishedPulling="2024-10-14 13:43:38.39212678 +0000 UTC m=+231.265405007" observedRunningTime="2024-10-14 13:43:38.904330519 +0000 UTC m=+231.777608755" watchObservedRunningTime="2024-10-14 13:45:56.891784176 +0000 UTC m=+369.765062403"
	Oct 14 13:45:56 addons-002422 kubelet[1493]: I1014 13:45:56.967394    1493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkvjj\" (UniqueName: \"kubernetes.io/projected/f5f83fdb-25be-40d5-9d3f-0e790983e8df-kube-api-access-nkvjj\") pod \"hello-world-app-55bf9c44b4-pfhmd\" (UID: \"f5f83fdb-25be-40d5-9d3f-0e790983e8df\") " pod="default/hello-world-app-55bf9c44b4-pfhmd"
	Oct 14 13:45:57 addons-002422 kubelet[1493]: E1014 13:45:57.407799    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913557407559699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:57 addons-002422 kubelet[1493]: E1014 13:45:57.407840    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913557407559699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569145,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:59 addons-002422 kubelet[1493]: I1014 13:45:59.191201    1493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-pfhmd" podStartSLOduration=2.044239665 podStartE2EDuration="3.191183649s" podCreationTimestamp="2024-10-14 13:45:56 +0000 UTC" firstStartedPulling="2024-10-14 13:45:57.25463384 +0000 UTC m=+370.127912068" lastFinishedPulling="2024-10-14 13:45:58.401577824 +0000 UTC m=+371.274856052" observedRunningTime="2024-10-14 13:45:59.190453344 +0000 UTC m=+372.063731572" watchObservedRunningTime="2024-10-14 13:45:59.191183649 +0000 UTC m=+372.064461876"
	
	
	==> storage-provisioner [749f7ebdaeaf50739e47418bda3ae0c2d5a85bd04259b5f9d851861c9e661f83] <==
	I1014 13:40:09.372284       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 13:40:09.406109       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 13:40:09.406166       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 13:40:09.433641       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 13:40:09.434046       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-002422_b643fb17-4d87-4a06-8a88-cc3ffff5f150!
	I1014 13:40:09.435321       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8963a5d4-969c-4353-a393-1ec58810a372", APIVersion:"v1", ResourceVersion:"902", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-002422_b643fb17-4d87-4a06-8a88-cc3ffff5f150 became leader
	I1014 13:40:09.535214       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-002422_b643fb17-4d87-4a06-8a88-cc3ffff5f150!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-002422 -n addons-002422
helpers_test.go:261: (dbg) Run:  kubectl --context addons-002422 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-wp9ww ingress-nginx-admission-patch-m2gmb
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-002422 describe pod ingress-nginx-admission-create-wp9ww ingress-nginx-admission-patch-m2gmb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-002422 describe pod ingress-nginx-admission-create-wp9ww ingress-nginx-admission-patch-m2gmb: exit status 1 (145.659609ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wp9ww" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-m2gmb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-002422 describe pod ingress-nginx-admission-create-wp9ww ingress-nginx-admission-patch-m2gmb: exit status 1
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-002422 addons disable ingress-dns --alsologtostderr -v=1: (1.064963384s)
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable ingress --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-002422 addons disable ingress --alsologtostderr -v=1: (7.761015492s)
--- FAIL: TestAddons/parallel/Ingress (152.54s)
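The ssh curl step above exited with status 28, which is curl's exit code for an operation timeout, so the request most likely never got a response from the ingress controller. A minimal way to re-check the same path by hand, assuming the addons-002422 profile shown in this run (these commands are an illustrative sketch, not part of the recorded test output):

    out/minikube-linux-arm64 -p addons-002422 ssh "curl -sv -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-002422 -n ingress-nginx get pods -o wide
    kubectl --context addons-002422 get ingress -A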

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (346.12s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.243965ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-p68nc" [344d0c1c-bbea-4de6-a079-724c18606d38] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003839388s
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (93.851541ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 2m46.092066344s

                                                
                                                
** /stderr **
I1014 13:42:38.094971    7544 retry.go:31] will retry after 2.193087801s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (90.112693ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 2m48.375921395s

                                                
                                                
** /stderr **
I1014 13:42:40.378844    7544 retry.go:31] will retry after 4.994750459s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (89.312439ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 2m53.460241018s

                                                
                                                
** /stderr **
I1014 13:42:45.463169    7544 retry.go:31] will retry after 9.953382795s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (181.606583ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 3m3.59584649s

                                                
                                                
** /stderr **
I1014 13:42:55.598582    7544 retry.go:31] will retry after 14.013276062s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (108.877216ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 3m17.718293879s

                                                
                                                
** /stderr **
I1014 13:43:09.721662    7544 retry.go:31] will retry after 9.560109135s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (130.365961ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 3m27.407871381s

                                                
                                                
** /stderr **
I1014 13:43:19.412427    7544 retry.go:31] will retry after 23.201132283s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (105.22685ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 3m50.716259938s

                                                
                                                
** /stderr **
I1014 13:43:42.719108    7544 retry.go:31] will retry after 31.024795683s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (91.580083ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 4m21.835930748s

                                                
                                                
** /stderr **
I1014 13:44:13.838985    7544 retry.go:31] will retry after 1m12.492129831s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (88.23672ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 5m34.417953831s

                                                
                                                
** /stderr **
I1014 13:45:26.421255    7544 retry.go:31] will retry after 30.541126886s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (239.253289ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 6m5.198608059s

                                                
                                                
** /stderr **
I1014 13:45:57.202616    7544 retry.go:31] will retry after 1m9.348134386s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (89.056506ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 7m14.637060341s

                                                
                                                
** /stderr **
I1014 13:47:06.640101    7544 retry.go:31] will retry after 1m8.400086786s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-002422 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-002422 top pods -n kube-system: exit status 1 (93.433915ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bsnhb, age: 8m23.131182724s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
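Every retry above failed with "Metrics not available", so a quick follow-up is to check whether metrics-server ever registered its aggregated API and what it logged. A sketch of that check, assuming the standard v1beta1.metrics.k8s.io APIService name and the k8s-app=metrics-server label the test itself selects on (not part of the recorded run):

    kubectl --context addons-002422 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-002422 -n kube-system logs -l k8s-app=metrics-server --tail=50
    kubectl --context addons-002422 top nodes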
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-002422
helpers_test.go:235: (dbg) docker inspect addons-002422:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c",
	        "Created": "2024-10-14T13:39:26.040660176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8793,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-14T13:39:26.200141481Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c/hosts",
	        "LogPath": "/var/lib/docker/containers/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c/05e13f44fa23211d41ae7b94d00466d20b84537aca8298c4d05c6211297bec8c-json.log",
	        "Name": "/addons-002422",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-002422:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-002422",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4aa3658e12047d4aae80e56b1a737b93933e3445eef34b2f05f9ae1a1f27b38b-init/diff:/var/lib/docker/overlay2/0fbe7ab461eb9f9a72ecb1d2c088de9e51a70b12c6d6de37aeffa6e2c5634bdc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4aa3658e12047d4aae80e56b1a737b93933e3445eef34b2f05f9ae1a1f27b38b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4aa3658e12047d4aae80e56b1a737b93933e3445eef34b2f05f9ae1a1f27b38b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4aa3658e12047d4aae80e56b1a737b93933e3445eef34b2f05f9ae1a1f27b38b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-002422",
	                "Source": "/var/lib/docker/volumes/addons-002422/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-002422",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-002422",
	                "name.minikube.sigs.k8s.io": "addons-002422",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0aed6d17065638fabcf4af9629eb2706f94c1b790a82245b3b3aad651ea1da99",
	            "SandboxKey": "/var/run/docker/netns/0aed6d170656",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-002422": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0ff409cb6a6d634b31679069de159a6c4d604dc8e7199db02844607a2ed8ceed",
	                    "EndpointID": "6ecc69181af9927db04c9d672fff7ea2ed76c70627324bcf71e3d5589e3b0324",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-002422",
	                        "05e13f44fa23"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-002422 -n addons-002422
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-002422 logs -n 25: (1.373240239s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-849591 | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | download-docker-849591                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-849591                                                                   | download-docker-849591 | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-893512   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | binary-mirror-893512                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35277                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-893512                                                                     | binary-mirror-893512   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | addons-002422                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | addons-002422                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-002422 --wait=true                                                                | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-002422 addons disable                                                                | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-002422 addons disable                                                                | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | -p addons-002422                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-002422 addons disable                                                                | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-002422 ip                                                                            | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	| addons  | addons-002422 addons disable                                                                | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-002422 addons                                                                        | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:43 UTC | 14 Oct 24 13:43 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-002422 addons                                                                        | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:43 UTC | 14 Oct 24 13:43 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-002422 addons                                                                        | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:43 UTC | 14 Oct 24 13:43 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-002422 ssh curl -s                                                                   | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:43 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-002422 ip                                                                            | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:45 UTC | 14 Oct 24 13:45 UTC |
	| addons  | addons-002422 addons disable                                                                | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:46 UTC | 14 Oct 24 13:46 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-002422 addons disable                                                                | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:46 UTC | 14 Oct 24 13:46 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-002422 addons                                                                        | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:46 UTC | 14 Oct 24 13:46 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-002422 addons disable                                                                | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:46 UTC | 14 Oct 24 13:46 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-002422 ssh cat                                                                       | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:46 UTC | 14 Oct 24 13:46 UTC |
	|         | /opt/local-path-provisioner/pvc-89f2f068-92eb-4538-ac74-ca3f5159b907_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-002422 addons disable                                                                | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:46 UTC | 14 Oct 24 13:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-002422 addons                                                                        | addons-002422          | jenkins | v1.34.0 | 14 Oct 24 13:47 UTC | 14 Oct 24 13:47 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:39:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:39:01.519189    8300 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:39:01.519388    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:39:01.519399    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:39:01.519408    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:39:01.519689    8300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	I1014 13:39:01.520212    8300 out.go:352] Setting JSON to false
	I1014 13:39:01.521042    8300 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1292,"bootTime":1728911849,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1014 13:39:01.521114    8300 start.go:139] virtualization:  
	I1014 13:39:01.523571    8300 out.go:177] * [addons-002422] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 13:39:01.525747    8300 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:39:01.525781    8300 notify.go:220] Checking for updates...
	I1014 13:39:01.529853    8300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:39:01.531546    8300 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	I1014 13:39:01.532842    8300 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	I1014 13:39:01.534232    8300 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 13:39:01.535691    8300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:39:01.537232    8300 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:39:01.564142    8300 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:39:01.564253    8300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:39:01.620162    8300 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-14 13:39:01.611082175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:39:01.620265    8300 docker.go:318] overlay module found
	I1014 13:39:01.621991    8300 out.go:177] * Using the docker driver based on user configuration
	I1014 13:39:01.623225    8300 start.go:297] selected driver: docker
	I1014 13:39:01.623240    8300 start.go:901] validating driver "docker" against <nil>
	I1014 13:39:01.623253    8300 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:39:01.623855    8300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:39:01.686515    8300 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-14 13:39:01.677396922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:39:01.686715    8300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:39:01.686954    8300 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:39:01.688844    8300 out.go:177] * Using Docker driver with root privileges
	I1014 13:39:01.690119    8300 cni.go:84] Creating CNI manager for ""
	I1014 13:39:01.690189    8300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 13:39:01.690213    8300 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
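	The "docker" driver + "crio" runtime pairing above is what selects the kindnet CNI. Once the cluster is up, the choice can be confirmed with a one-liner (a sketch; the daemonset name "kindnet" in kube-system is the usual default and is assumed here):
	  kubectl --context addons-002422 -n kube-system get daemonset kindnet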
	I1014 13:39:01.690301    8300 start.go:340] cluster config:
	{Name:addons-002422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-002422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:39:01.691786    8300 out.go:177] * Starting "addons-002422" primary control-plane node in "addons-002422" cluster
	I1014 13:39:01.692864    8300 cache.go:121] Beginning downloading kic base image for docker with crio
	I1014 13:39:01.694108    8300 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1014 13:39:01.695781    8300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:39:01.695827    8300 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1014 13:39:01.695838    8300 cache.go:56] Caching tarball of preloaded images
	I1014 13:39:01.695840    8300 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1014 13:39:01.695914    8300 preload.go:172] Found /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 13:39:01.695924    8300 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:39:01.696276    8300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/config.json ...
	I1014 13:39:01.696300    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/config.json: {Name:mke32a7b3203164b7b45aacc3b9f08280e6d7f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:01.712115    8300 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1014 13:39:01.712224    8300 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1014 13:39:01.712242    8300 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1014 13:39:01.712246    8300 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1014 13:39:01.712253    8300 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1014 13:39:01.712258    8300 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1014 13:39:18.423690    8300 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1014 13:39:18.423728    8300 cache.go:194] Successfully downloaded all kic artifacts
	I1014 13:39:18.423768    8300 start.go:360] acquireMachinesLock for addons-002422: {Name:mkd84a4fa8b14773f3ba751e5d68c67ef06bd4f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:39:18.423889    8300 start.go:364] duration metric: took 99.971µs to acquireMachinesLock for "addons-002422"
	I1014 13:39:18.423920    8300 start.go:93] Provisioning new machine with config: &{Name:addons-002422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-002422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:39:18.424000    8300 start.go:125] createHost starting for "" (driver="docker")
	I1014 13:39:18.426424    8300 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1014 13:39:18.426686    8300 start.go:159] libmachine.API.Create for "addons-002422" (driver="docker")
	I1014 13:39:18.426720    8300 client.go:168] LocalClient.Create starting
	I1014 13:39:18.426812    8300 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem
	I1014 13:39:18.926000    8300 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/cert.pem
	I1014 13:39:19.558813    8300 cli_runner.go:164] Run: docker network inspect addons-002422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 13:39:19.574302    8300 cli_runner.go:211] docker network inspect addons-002422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 13:39:19.574389    8300 network_create.go:284] running [docker network inspect addons-002422] to gather additional debugging logs...
	I1014 13:39:19.574411    8300 cli_runner.go:164] Run: docker network inspect addons-002422
	W1014 13:39:19.589486    8300 cli_runner.go:211] docker network inspect addons-002422 returned with exit code 1
	I1014 13:39:19.589523    8300 network_create.go:287] error running [docker network inspect addons-002422]: docker network inspect addons-002422: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-002422 not found
	I1014 13:39:19.589536    8300 network_create.go:289] output of [docker network inspect addons-002422]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-002422 not found
	
	** /stderr **
	I1014 13:39:19.589632    8300 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 13:39:19.605243    8300 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400055c310}
	I1014 13:39:19.605285    8300 network_create.go:124] attempt to create docker network addons-002422 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 13:39:19.605337    8300 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-002422 addons-002422
	I1014 13:39:19.672554    8300 network_create.go:108] docker network addons-002422 192.168.49.0/24 created
	I1014 13:39:19.672581    8300 kic.go:121] calculated static IP "192.168.49.2" for the "addons-002422" container
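	The subnet and gateway picked above can be read back from the created network (a sketch using only names and flags that appear in this log):
	  docker network inspect addons-002422 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # expected output: 192.168.49.0/24 192.168.49.1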
	I1014 13:39:19.672660    8300 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 13:39:19.687481    8300 cli_runner.go:164] Run: docker volume create addons-002422 --label name.minikube.sigs.k8s.io=addons-002422 --label created_by.minikube.sigs.k8s.io=true
	I1014 13:39:19.709849    8300 oci.go:103] Successfully created a docker volume addons-002422
	I1014 13:39:19.709939    8300 cli_runner.go:164] Run: docker run --rm --name addons-002422-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-002422 --entrypoint /usr/bin/test -v addons-002422:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1014 13:39:21.906996    8300 cli_runner.go:217] Completed: docker run --rm --name addons-002422-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-002422 --entrypoint /usr/bin/test -v addons-002422:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (2.197000817s)
	I1014 13:39:21.907030    8300 oci.go:107] Successfully prepared a docker volume addons-002422
	I1014 13:39:21.907049    8300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:39:21.907067    8300 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 13:39:21.907137    8300 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-002422:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 13:39:25.970946    8300 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-002422:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.063768387s)
	I1014 13:39:25.970975    8300 kic.go:203] duration metric: took 4.063905487s to extract preloaded images to volume ...
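	To sanity-check the extraction, the same volume can be listed from a throwaway container (a sketch; the entrypoint override mirrors the sidecar invocation above, and /var/lib/containers is where cri-o is expected to keep its image store):
	  docker run --rm --entrypoint /bin/ls -v addons-002422:/var \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec \
	    /var/lib/containers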
	W1014 13:39:25.971118    8300 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 13:39:25.971246    8300 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 13:39:26.025710    8300 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-002422 --name addons-002422 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-002422 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-002422 --network addons-002422 --ip 192.168.49.2 --volume addons-002422:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1014 13:39:26.381604    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Running}}
	I1014 13:39:26.403583    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:26.426817    8300 cli_runner.go:164] Run: docker exec addons-002422 stat /var/lib/dpkg/alternatives/iptables
	I1014 13:39:26.492116    8300 oci.go:144] the created container "addons-002422" has a running status.
	I1014 13:39:26.492143    8300 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa...
	I1014 13:39:27.159451    8300 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 13:39:27.183362    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:27.205625    8300 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 13:39:27.205645    8300 kic_runner.go:114] Args: [docker exec --privileged addons-002422 chown docker:docker /home/docker/.ssh/authorized_keys]
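	Done by hand, the key provisioning above amounts to the following (a sketch, not minikube's exact code path, which streams the key through docker exec rather than docker cp):
	  ssh-keygen -t rsa -N '' -f id_rsa
	  docker cp id_rsa.pub addons-002422:/home/docker/.ssh/authorized_keys
	  docker exec --privileged addons-002422 chown docker:docker /home/docker/.ssh/authorized_keys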
	I1014 13:39:27.286588    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:27.318504    8300 machine.go:93] provisionDockerMachine start ...
	I1014 13:39:27.318598    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:27.342014    8300 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:27.342285    8300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:27.342296    8300 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 13:39:27.476294    8300 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-002422
	
	I1014 13:39:27.476315    8300 ubuntu.go:169] provisioning hostname "addons-002422"
	I1014 13:39:27.476377    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:27.498515    8300 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:27.498751    8300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:27.498763    8300 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-002422 && echo "addons-002422" | sudo tee /etc/hostname
	I1014 13:39:27.649621    8300 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-002422
	
	I1014 13:39:27.649757    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:27.670449    8300 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:27.670685    8300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:27.670702    8300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-002422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-002422/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-002422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:39:27.796523    8300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:39:27.796547    8300 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19790-2228/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-2228/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-2228/.minikube}
	I1014 13:39:27.796597    8300 ubuntu.go:177] setting up certificates
	I1014 13:39:27.796609    8300 provision.go:84] configureAuth start
	I1014 13:39:27.796680    8300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-002422
	I1014 13:39:27.813604    8300 provision.go:143] copyHostCerts
	I1014 13:39:27.813686    8300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-2228/.minikube/key.pem (1675 bytes)
	I1014 13:39:27.813805    8300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-2228/.minikube/ca.pem (1082 bytes)
	I1014 13:39:27.813863    8300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-2228/.minikube/cert.pem (1123 bytes)
	I1014 13:39:27.813939    8300 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-2228/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca-key.pem org=jenkins.addons-002422 san=[127.0.0.1 192.168.49.2 addons-002422 localhost minikube]
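	The SAN list requested here ends up in the server cert written a few lines below; once the file exists it can be verified with openssl (sketch):
	  openssl x509 -noout -text -in /home/jenkins/minikube-integration/19790-2228/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'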
	I1014 13:39:28.604899    8300 provision.go:177] copyRemoteCerts
	I1014 13:39:28.604976    8300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:39:28.605031    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:28.621097    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
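	With the key path and host-mapped port shown in that client struct, the node accepts a plain ssh login (a sketch; the port 32768 is ephemeral and differs between runs):
	  ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa -p 32768 docker@127.0.0.1 hostname
	  # expected output: addons-002422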
	I1014 13:39:28.713880    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 13:39:28.737206    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:39:28.760651    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:39:28.783892    8300 provision.go:87] duration metric: took 987.268952ms to configureAuth
	I1014 13:39:28.783928    8300 ubuntu.go:193] setting minikube options for container-runtime
	I1014 13:39:28.784128    8300 config.go:182] Loaded profile config "addons-002422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:28.784234    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:28.801092    8300 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:28.801333    8300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:28.801366    8300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:39:29.021776    8300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:39:29.021839    8300 machine.go:96] duration metric: took 1.703315975s to provisionDockerMachine
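	The insecure-registry option written to /etc/sysconfig/crio.minikube just above can be read back through minikube's own ssh wrapper (sketch):
	  out/minikube-linux-arm64 -p addons-002422 ssh "cat /etc/sysconfig/crio.minikube"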
	I1014 13:39:29.021866    8300 client.go:171] duration metric: took 10.595136953s to LocalClient.Create
	I1014 13:39:29.021891    8300 start.go:167] duration metric: took 10.595203636s to libmachine.API.Create "addons-002422"
	I1014 13:39:29.021923    8300 start.go:293] postStartSetup for "addons-002422" (driver="docker")
	I1014 13:39:29.021950    8300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:39:29.022059    8300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:39:29.022138    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:29.039955    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:29.138030    8300 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:39:29.141073    8300 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 13:39:29.141105    8300 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1014 13:39:29.141118    8300 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1014 13:39:29.141125    8300 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1014 13:39:29.141135    8300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-2228/.minikube/addons for local assets ...
	I1014 13:39:29.141205    8300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-2228/.minikube/files for local assets ...
	I1014 13:39:29.141241    8300 start.go:296] duration metric: took 119.286948ms for postStartSetup
	I1014 13:39:29.141939    8300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-002422
	I1014 13:39:29.161897    8300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/config.json ...
	I1014 13:39:29.162248    8300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 13:39:29.162301    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:29.179425    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:29.273204    8300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 13:39:29.277388    8300 start.go:128] duration metric: took 10.853372344s to createHost
	I1014 13:39:29.277420    8300 start.go:83] releasing machines lock for "addons-002422", held for 10.853516426s
	I1014 13:39:29.277488    8300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-002422
	I1014 13:39:29.292658    8300 ssh_runner.go:195] Run: cat /version.json
	I1014 13:39:29.292711    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:29.293039    8300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:39:29.293123    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:29.309343    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:29.318878    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:29.400173    8300 ssh_runner.go:195] Run: systemctl --version
	I1014 13:39:29.535275    8300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:39:29.682522    8300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 13:39:29.686453    8300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:39:29.706862    8300 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1014 13:39:29.706974    8300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:39:29.734354    8300 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
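	After this step only the .mk_disabled copies and kindnet's own config should be left active; a quick check (a sketch; the 10-kindnet.conflist name is kindnet's usual default and is assumed here):
	  out/minikube-linux-arm64 -p addons-002422 ssh "ls /etc/cni/net.d"
	  # expect 87-podman-bridge.conflist.mk_disabled, 100-crio-bridge.conf.mk_disabled,
	  # plus 10-kindnet.conflist once kindnet has started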
	I1014 13:39:29.734375    8300 start.go:495] detecting cgroup driver to use...
	I1014 13:39:29.734406    8300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 13:39:29.734454    8300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:39:29.749192    8300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:39:29.760184    8300 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:39:29.760246    8300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:39:29.774112    8300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:39:29.788395    8300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:39:29.880801    8300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:39:29.972415    8300 docker.go:233] disabling docker service ...
	I1014 13:39:29.972481    8300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:39:29.992061    8300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:39:30.011825    8300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:39:30.109186    8300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:39:30.209178    8300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:39:30.221080    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:39:30.237424    8300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:39:30.237513    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.247070    8300 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:39:30.247171    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.256865    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.266642    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.277380    8300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:39:30.286952    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.297328    8300 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:30.313264    8300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
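	The net effect of the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl) can be confirmed in one grep (sketch):
	  out/minikube-linux-arm64 -p addons-002422 ssh "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"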
	I1014 13:39:30.323340    8300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:39:30.331728    8300 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:39:30.331833    8300 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:39:30.345487    8300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
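	Both kernel knobs touched here (bridge netfilter via modprobe, IPv4 forwarding via the echo) can be read back after the restart (sketch):
	  out/minikube-linux-arm64 -p addons-002422 ssh "sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"
	  # expect both set to 1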
	I1014 13:39:30.354185    8300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:30.443199    8300 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:39:30.559773    8300 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:39:30.559901    8300 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:39:30.563352    8300 start.go:563] Will wait 60s for crictl version
	I1014 13:39:30.563470    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:39:30.567069    8300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:39:30.608005    8300 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1014 13:39:30.608183    8300 ssh_runner.go:195] Run: crio --version
	I1014 13:39:30.644625    8300 ssh_runner.go:195] Run: crio --version
	I1014 13:39:30.685006    8300 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1014 13:39:30.686361    8300 cli_runner.go:164] Run: docker network inspect addons-002422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 13:39:30.703035    8300 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 13:39:30.706678    8300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:39:30.717505    8300 kubeadm.go:883] updating cluster {Name:addons-002422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-002422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 13:39:30.717623    8300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:39:30.717682    8300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:39:30.792087    8300 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 13:39:30.792116    8300 crio.go:433] Images already preloaded, skipping extraction
	I1014 13:39:30.792174    8300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:39:30.827792    8300 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 13:39:30.827816    8300 cache_images.go:84] Images are preloaded, skipping loading
	I1014 13:39:30.827824    8300 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1014 13:39:30.827955    8300 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-002422 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-002422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
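	The unit above is installed as a systemd drop-in a few steps below; systemd can show the merged result on the node (sketch):
	  out/minikube-linux-arm64 -p addons-002422 ssh "sudo systemctl cat kubelet"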
	I1014 13:39:30.828039    8300 ssh_runner.go:195] Run: crio config
	I1014 13:39:30.874149    8300 cni.go:84] Creating CNI manager for ""
	I1014 13:39:30.874171    8300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 13:39:30.874181    8300 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 13:39:30.874224    8300 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-002422 NodeName:addons-002422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 13:39:30.874361    8300 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-002422"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 13:39:30.874429    8300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:39:30.882973    8300 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 13:39:30.883071    8300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 13:39:30.892223    8300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 13:39:30.909506    8300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:39:30.926769    8300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
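	The three files just copied (the kubelet drop-in, the kubelet unit, and the kubeadm config rendered above) land on the node; the kubeadm config can be inspected there directly (sketch):
	  out/minikube-linux-arm64 -p addons-002422 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"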
	I1014 13:39:30.944321    8300 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 13:39:30.947745    8300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:39:30.958400    8300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:31.045686    8300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:39:31.059522    8300 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422 for IP: 192.168.49.2
	I1014 13:39:31.059593    8300 certs.go:194] generating shared ca certs ...
	I1014 13:39:31.059622    8300 certs.go:226] acquiring lock for ca certs: {Name:mk06df15dc793252bd5ffa6daa3e5f2510797850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:31.059783    8300 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-2228/.minikube/ca.key
	I1014 13:39:31.279549    8300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2228/.minikube/ca.crt ...
	I1014 13:39:31.279582    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/ca.crt: {Name:mkf2e09cdeaf406bd5dbfb6df51fda19d11b3a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:31.279812    8300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2228/.minikube/ca.key ...
	I1014 13:39:31.279826    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/ca.key: {Name:mkbb0140f8b18956b3e337fe5d9dac3444c3cff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:31.279917    8300 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.key
	I1014 13:39:32.102633    8300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.crt ...
	I1014 13:39:32.102667    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.crt: {Name:mk87e80ab56810a443caa4380c01f4fa59f6347a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.102908    8300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.key ...
	I1014 13:39:32.102928    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.key: {Name:mk3f83de2f8ad31643196f738fbd59675505d818 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.103014    8300 certs.go:256] generating profile certs ...
	I1014 13:39:32.103079    8300 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.key
	I1014 13:39:32.103097    8300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt with IP's: []
	I1014 13:39:32.527349    8300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt ...
	I1014 13:39:32.527383    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: {Name:mk7e896bcb1761dc92896d4828a4f921b266d096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.527596    8300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.key ...
	I1014 13:39:32.527612    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.key: {Name:mk2471b3e7dfa66ccab07ee70fc530ef48ac5f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.527706    8300 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key.17286ce0
	I1014 13:39:32.527726    8300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt.17286ce0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 13:39:33.097055    8300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt.17286ce0 ...
	I1014 13:39:33.097092    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt.17286ce0: {Name:mk0b396ed04de990231c7535e37286cbdddbeccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:33.097278    8300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key.17286ce0 ...
	I1014 13:39:33.097292    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key.17286ce0: {Name:mkeaac9f624665f13ab091190d99656a19ad24ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:33.097375    8300 certs.go:381] copying /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt.17286ce0 -> /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt
	I1014 13:39:33.097463    8300 certs.go:385] copying /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key.17286ce0 -> /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key
	I1014 13:39:33.097517    8300 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.key
	I1014 13:39:33.097536    8300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.crt with IP's: []
	I1014 13:39:33.368114    8300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.crt ...
	I1014 13:39:33.368146    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.crt: {Name:mk617006c2b50b41e3bf3976f48c6e2173294ddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:33.368332    8300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.key ...
	I1014 13:39:33.368345    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.key: {Name:mk2d2071f6a997e883c7ef5cbfc1c62f114134be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:33.368542    8300 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:39:33.368583    8300 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/ca.pem (1082 bytes)
	I1014 13:39:33.368611    8300 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:39:33.368640    8300 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2228/.minikube/certs/key.pem (1675 bytes)
	I1014 13:39:33.369276    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:39:33.396664    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 13:39:33.421787    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:39:33.446268    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 13:39:33.470393    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 13:39:33.498381    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:39:33.522109    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:39:33.545983    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 13:39:33.569791    8300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2228/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:39:33.594911    8300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 13:39:33.613470    8300 ssh_runner.go:195] Run: openssl version
	I1014 13:39:33.618883    8300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:39:33.628463    8300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:33.631697    8300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:33.631783    8300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:33.638573    8300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
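	The b5213941.0 link name is the openssl subject hash of the minikube CA, reproducible on the host (sketch):
	  openssl x509 -hash -noout -in /home/jenkins/minikube-integration/19790-2228/.minikube/ca.crt
	  # prints b5213941, matching the symlink created above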
	I1014 13:39:33.647929    8300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:39:33.651145    8300 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:39:33.651190    8300 kubeadm.go:392] StartCluster: {Name:addons-002422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-002422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:39:33.651267    8300 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 13:39:33.651321    8300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 13:39:33.690330    8300 cri.go:89] found id: ""
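
The empty `found id: ""` means no kube-system containers exist yet, i.e. a genuinely fresh node. A sketch of the same crictl query, using exactly the flags from the Run line above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers (any state)
// labelled with the kube-system pod namespace, mirroring the log's crictl call.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// crictl --quiet prints one container ID per line; empty output means none.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
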
	I1014 13:39:33.690396    8300 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 13:39:33.699200    8300 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 13:39:33.707904    8300 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 13:39:33.707968    8300 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 13:39:33.716528    8300 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 13:39:33.716548    8300 kubeadm.go:157] found existing configuration files:
	
	I1014 13:39:33.716598    8300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 13:39:33.725169    8300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 13:39:33.725233    8300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 13:39:33.733899    8300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 13:39:33.742056    8300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 13:39:33.742158    8300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 13:39:33.750333    8300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 13:39:33.758992    8300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 13:39:33.759078    8300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 13:39:33.767897    8300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 13:39:33.776528    8300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 13:39:33.776599    8300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
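
The four grep-then-rm sequences above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes must reference https://control-plane.minikube.internal:8443, and any file that doesn't match (or, as here, doesn't exist at all) is cleared before kubeadm init. A compact sketch of that loop:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits 0 only if the endpoint is present; a missing file and a
		// wrong endpoint both count as stale for this cluster.
		if err := exec.Command("grep", "-q", endpoint, conf).Run(); err != nil {
			log.Printf("%s lacks %s, removing", conf, endpoint)
			_ = os.Remove(conf) // ignore "no such file", like `rm -f`
		}
	}
}
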
	I1014 13:39:33.785190    8300 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 13:39:33.823923    8300 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 13:39:33.824199    8300 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 13:39:33.844918    8300 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1014 13:39:33.845063    8300 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1014 13:39:33.845122    8300 kubeadm.go:310] OS: Linux
	I1014 13:39:33.845223    8300 kubeadm.go:310] CGROUPS_CPU: enabled
	I1014 13:39:33.845287    8300 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1014 13:39:33.845338    8300 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1014 13:39:33.845390    8300 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1014 13:39:33.845445    8300 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1014 13:39:33.845501    8300 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1014 13:39:33.845550    8300 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1014 13:39:33.845602    8300 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1014 13:39:33.845653    8300 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1014 13:39:33.916605    8300 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 13:39:33.916807    8300 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 13:39:33.916916    8300 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 13:39:33.925110    8300 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 13:39:33.930119    8300 out.go:235]   - Generating certificates and keys ...
	I1014 13:39:33.930214    8300 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 13:39:33.930332    8300 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 13:39:34.198508    8300 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 13:39:34.622750    8300 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 13:39:34.805332    8300 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 13:39:35.248566    8300 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 13:39:35.947522    8300 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 13:39:35.947821    8300 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-002422 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 13:39:36.290501    8300 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 13:39:36.295957    8300 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-002422 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 13:39:36.540393    8300 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 13:39:36.985910    8300 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 13:39:37.131122    8300 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 13:39:37.131491    8300 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 13:39:37.561848    8300 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 13:39:38.018910    8300 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 13:39:38.921446    8300 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 13:39:39.097017    8300 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 13:39:39.398377    8300 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 13:39:39.399030    8300 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 13:39:39.401991    8300 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 13:39:39.403670    8300 out.go:235]   - Booting up control plane ...
	I1014 13:39:39.403765    8300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 13:39:39.403841    8300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 13:39:39.404568    8300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 13:39:39.414705    8300 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 13:39:39.420512    8300 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 13:39:39.420912    8300 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 13:39:39.515212    8300 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 13:39:39.515331    8300 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 13:39:40.517315    8300 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001877986s
	I1014 13:39:40.517407    8300 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 13:39:46.520416    8300 kubeadm.go:310] [api-check] The API server is healthy after 6.001293176s
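
Both waits above are plain HTTP health probes: first the kubelet's /healthz on 127.0.0.1:10248, then the API server's health endpoint, each with a 4m0s budget. A sketch of such a poll with a deadline:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 or the timeout elapses,
// mirroring kubeadm's kubelet-check and api-check phases above.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
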
	I1014 13:39:46.537487    8300 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 13:39:46.551261    8300 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 13:39:46.577727    8300 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 13:39:46.577920    8300 kubeadm.go:310] [mark-control-plane] Marking the node addons-002422 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 13:39:46.587508    8300 kubeadm.go:310] [bootstrap-token] Using token: p0ldfg.l4f8resh3yr04gj6
	I1014 13:39:46.588848    8300 out.go:235]   - Configuring RBAC rules ...
	I1014 13:39:46.588969    8300 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 13:39:46.594842    8300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 13:39:46.605038    8300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 13:39:46.610706    8300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 13:39:46.615031    8300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 13:39:46.619515    8300 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 13:39:46.925078    8300 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 13:39:47.354345    8300 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 13:39:47.924387    8300 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 13:39:47.925552    8300 kubeadm.go:310] 
	I1014 13:39:47.925643    8300 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 13:39:47.925659    8300 kubeadm.go:310] 
	I1014 13:39:47.925756    8300 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 13:39:47.925765    8300 kubeadm.go:310] 
	I1014 13:39:47.925801    8300 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 13:39:47.925880    8300 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 13:39:47.925939    8300 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 13:39:47.925943    8300 kubeadm.go:310] 
	I1014 13:39:47.926001    8300 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 13:39:47.926005    8300 kubeadm.go:310] 
	I1014 13:39:47.926057    8300 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 13:39:47.926061    8300 kubeadm.go:310] 
	I1014 13:39:47.926116    8300 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 13:39:47.926206    8300 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 13:39:47.926279    8300 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 13:39:47.926283    8300 kubeadm.go:310] 
	I1014 13:39:47.926378    8300 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 13:39:47.926460    8300 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 13:39:47.926464    8300 kubeadm.go:310] 
	I1014 13:39:47.926553    8300 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p0ldfg.l4f8resh3yr04gj6 \
	I1014 13:39:47.926662    8300 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7f4316051a451070b62e5ea00267a1d9ae2a3434782771c12eaedf3124887c0a \
	I1014 13:39:47.926684    8300 kubeadm.go:310] 	--control-plane 
	I1014 13:39:47.926688    8300 kubeadm.go:310] 
	I1014 13:39:47.926779    8300 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 13:39:47.926783    8300 kubeadm.go:310] 
	I1014 13:39:47.926870    8300 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p0ldfg.l4f8resh3yr04gj6 \
	I1014 13:39:47.926979    8300 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7f4316051a451070b62e5ea00267a1d9ae2a3434782771c12eaedf3124887c0a 
	I1014 13:39:47.929366    8300 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1014 13:39:47.929552    8300 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 13:39:47.929594    8300 cni.go:84] Creating CNI manager for ""
	I1014 13:39:47.929630    8300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 13:39:47.931645    8300 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 13:39:47.932940    8300 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 13:39:47.936529    8300 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 13:39:47.936549    8300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 13:39:47.953687    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
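
Because the docker driver is paired with the crio runtime, kindnet is selected and its manifest is applied with the cluster's own pinned kubectl and the on-node kubeconfig, as the Run line above shows. A sketch of that apply:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Paths taken from the log; kubectl is pinned to the cluster's version.
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	cmd := exec.Command("sudo", kubectl,
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("apply CNI manifest: %v: %s", err, out)
	}
	log.Printf("CNI applied:\n%s", out)
}
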
	I1014 13:39:48.223733    8300 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 13:39:48.223882    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:48.223931    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-002422 minikube.k8s.io/updated_at=2024_10_14T13_39_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=addons-002422 minikube.k8s.io/primary=true
	I1014 13:39:48.381596    8300 ops.go:34] apiserver oom_adj: -16
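
The -16 confirms the API server is shielded from the OOM killer. The check itself is just pgrep plus a /proc read, as the bash one-liner above shows; the same in Go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep prints the PID(s) of the running kube-apiserver; take the first.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(strings.Split(string(out), "\n")[0])
	// /proc/<pid>/oom_adj holds the (legacy) OOM score adjustment, -16 here.
	adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
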
	I1014 13:39:48.381696    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:48.882570    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:49.381802    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:49.882430    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:50.381796    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:50.882381    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:51.382632    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:51.882261    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:52.381834    8300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:52.506604    8300 kubeadm.go:1113] duration metric: took 4.282782773s to wait for elevateKubeSystemPrivileges
	I1014 13:39:52.506630    8300 kubeadm.go:394] duration metric: took 18.855443881s to StartCluster
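
The half-second cadence of the `kubectl get sa default` runs above is a poll: addon pods cannot be scheduled until the default service account exists, so minikube loops until the get succeeds (4.28s here). A sketch of that wait:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultSA polls `kubectl get sa default` until it succeeds, the same
// loop visible at roughly 500ms intervals in the log above.
func waitDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // the default service account now exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
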
	I1014 13:39:52.506645    8300 settings.go:142] acquiring lock: {Name:mk543bfe3e4ad3a74f943b74c0d30c5d6649b3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:52.506755    8300 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-2228/kubeconfig
	I1014 13:39:52.507116    8300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/kubeconfig: {Name:mkdfcbe4a3a3bd606687ca36b460845a3c3f03d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:52.507287    8300 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:39:52.507446    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 13:39:52.507675    8300 config.go:182] Loaded profile config "addons-002422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:52.507742    8300 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
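
Every addon marked true in the toEnable map then gets its own Setting/Checking/inspect sequence, and the out-of-order timestamps below show those sequences running concurrently. A sketch of such a fan-out; enableAddon is a hypothetical stand-in for the per-addon work (manifest copy plus kubectl apply):

package main

import (
	"fmt"
	"sync"
)

// enableAddon is a hypothetical placeholder for minikube's per-addon setup.
func enableAddon(name string) error {
	fmt.Printf("Setting addon %s=true\n", name)
	return nil
}

func main() {
	toEnable := map[string]bool{
		"yakd": true, "metrics-server": true, "ingress": true,
		"registry": true, "volcano": true, "dashboard": false,
	}
	var wg sync.WaitGroup
	for name, enabled := range toEnable {
		if !enabled {
			continue
		}
		wg.Add(1)
		go func(n string) { // one goroutine per addon, hence the interleaved log
			defer wg.Done()
			if err := enableAddon(n); err != nil {
				fmt.Printf("! Enabling '%s' returned an error: %v\n", n, err)
			}
		}(name)
	}
	wg.Wait()
}
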
	I1014 13:39:52.507819    8300 addons.go:69] Setting yakd=true in profile "addons-002422"
	I1014 13:39:52.507833    8300 addons.go:234] Setting addon yakd=true in "addons-002422"
	I1014 13:39:52.507856    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.508310    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.509142    8300 addons.go:69] Setting metrics-server=true in profile "addons-002422"
	I1014 13:39:52.509163    8300 addons.go:234] Setting addon metrics-server=true in "addons-002422"
	I1014 13:39:52.509188    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.509478    8300 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-002422"
	I1014 13:39:52.509491    8300 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-002422"
	I1014 13:39:52.509510    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.510208    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510573    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.511231    8300 out.go:177] * Verifying Kubernetes components...
	I1014 13:39:52.510580    8300 addons.go:69] Setting registry=true in profile "addons-002422"
	I1014 13:39:52.511505    8300 addons.go:234] Setting addon registry=true in "addons-002422"
	I1014 13:39:52.511538    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.511949    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510587    8300 addons.go:69] Setting storage-provisioner=true in profile "addons-002422"
	I1014 13:39:52.520596    8300 addons.go:234] Setting addon storage-provisioner=true in "addons-002422"
	I1014 13:39:52.520640    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.521115    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510591    8300 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-002422"
	I1014 13:39:52.532981    8300 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-002422"
	I1014 13:39:52.533369    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510595    8300 addons.go:69] Setting volcano=true in profile "addons-002422"
	I1014 13:39:52.550220    8300 addons.go:234] Setting addon volcano=true in "addons-002422"
	I1014 13:39:52.550275    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.510598    8300 addons.go:69] Setting volumesnapshots=true in profile "addons-002422"
	I1014 13:39:52.552126    8300 addons.go:234] Setting addon volumesnapshots=true in "addons-002422"
	I1014 13:39:52.552174    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.552826    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.553954    8300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:52.565059    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510632    8300 addons.go:69] Setting default-storageclass=true in profile "addons-002422"
	I1014 13:39:52.572346    8300 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-002422"
	I1014 13:39:52.572712    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510636    8300 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-002422"
	I1014 13:39:52.590505    8300 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-002422"
	I1014 13:39:52.510639    8300 addons.go:69] Setting cloud-spanner=true in profile "addons-002422"
	I1014 13:39:52.590563    8300 addons.go:234] Setting addon cloud-spanner=true in "addons-002422"
	I1014 13:39:52.590589    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.510643    8300 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-002422"
	I1014 13:39:52.590669    8300 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-002422"
	I1014 13:39:52.590688    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.591142    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.593729    8300 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1014 13:39:52.596547    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.597176    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510647    8300 addons.go:69] Setting ingress=true in profile "addons-002422"
	I1014 13:39:52.606101    8300 addons.go:234] Setting addon ingress=true in "addons-002422"
	I1014 13:39:52.606149    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.510650    8300 addons.go:69] Setting gcp-auth=true in profile "addons-002422"
	I1014 13:39:52.606423    8300 mustload.go:65] Loading cluster: addons-002422
	I1014 13:39:52.606578    8300 config.go:182] Loaded profile config "addons-002422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:52.606878    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.612375    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510654    8300 addons.go:69] Setting ingress-dns=true in profile "addons-002422"
	I1014 13:39:52.616829    8300 addons.go:234] Setting addon ingress-dns=true in "addons-002422"
	I1014 13:39:52.618350    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.618913    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.510659    8300 addons.go:69] Setting inspektor-gadget=true in profile "addons-002422"
	I1014 13:39:52.672064    8300 addons.go:234] Setting addon inspektor-gadget=true in "addons-002422"
	I1014 13:39:52.672106    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.672586    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.674339    8300 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1014 13:39:52.674367    8300 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1014 13:39:52.674431    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.685876    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.726733    8300 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1014 13:39:52.729479    8300 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 13:39:52.729504    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1014 13:39:52.729571    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.738262    8300 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 13:39:52.740620    8300 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:39:52.740706    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 13:39:52.740816    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.759125    8300 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1014 13:39:52.761903    8300 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 13:39:52.761978    8300 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 13:39:52.762079    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.775806    8300 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1014 13:39:52.777050    8300 out.go:177]   - Using image docker.io/registry:2.8.3
	I1014 13:39:52.778795    8300 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1014 13:39:52.778815    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1014 13:39:52.778876    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.810423    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W1014 13:39:52.810650    8300 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1014 13:39:52.843322    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1014 13:39:52.843343    8300 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1014 13:39:52.843404    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.875932    8300 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-002422"
	I1014 13:39:52.875978    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.876396    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.900015    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1014 13:39:52.903825    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1014 13:39:52.905216    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1014 13:39:52.905782    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 13:39:52.905912    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:52.907430    8300 addons.go:234] Setting addon default-storageclass=true in "addons-002422"
	I1014 13:39:52.907470    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.907882    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:39:52.909830    8300 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1014 13:39:52.925147    8300 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 13:39:52.925170    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1014 13:39:52.925233    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.960835    8300 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1014 13:39:52.962599    8300 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:52.962627    8300 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1014 13:39:52.962620    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1014 13:39:52.963349    8300 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1014 13:39:52.963366    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1014 13:39:52.963429    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.963206    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:39:52.970289    8300 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 13:39:52.970310    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1014 13:39:52.970378    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:52.984524    8300 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1014 13:39:52.986563    8300 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1014 13:39:52.986585    8300 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1014 13:39:52.986665    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:53.000851    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.002898    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.004669    8300 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:53.005171    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.007617    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.008851    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1014 13:39:53.011846    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1014 13:39:53.012163    8300 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1014 13:39:53.014531    8300 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 13:39:53.014551    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1014 13:39:53.014610    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:53.017400    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1014 13:39:53.021178    8300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:39:53.032819    8300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1014 13:39:53.048821    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1014 13:39:53.050784    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1014 13:39:53.051154    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:53.075156    8300 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 13:39:53.075179    8300 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 13:39:53.075238    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:53.118592    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.125109    8300 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1014 13:39:53.127021    8300 out.go:177]   - Using image docker.io/busybox:stable
	I1014 13:39:53.128267    8300 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 13:39:53.128285    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1014 13:39:53.128347    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:39:53.149764    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.157878    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.173153    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.173565    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.210375    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.227983    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:39:53.228760    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	W1014 13:39:53.232143    8300 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1014 13:39:53.232171    8300 retry.go:31] will retry after 201.075001ms: ssh: handshake failed: EOF
	I1014 13:39:53.251959    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	W1014 13:39:53.252826    8300 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1014 13:39:53.252854    8300 retry.go:31] will retry after 290.693438ms: ssh: handshake failed: EOF
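
The two handshake failures above are absorbed by a retry helper that sleeps a randomized, growing delay between attempts (201ms, then 290ms here). A sketch of that pattern:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn up to attempts times, sleeping a jittered,
// roughly doubling delay between tries, like the retry.go lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// jitter in [0.5, 1.5) of the current base delay
		d := time.Duration(float64(base) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
		base *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("result:", err)
}
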
	I1014 13:39:53.321325    8300 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1014 13:39:53.321400    8300 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1014 13:39:53.465529    8300 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1014 13:39:53.465600    8300 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1014 13:39:53.521197    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:39:53.559324    8300 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1014 13:39:53.559396    8300 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1014 13:39:53.588000    8300 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 13:39:53.588071    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1014 13:39:53.596777    8300 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1014 13:39:53.596867    8300 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1014 13:39:53.611889    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 13:39:53.691506    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 13:39:53.701027    8300 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1014 13:39:53.701101    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1014 13:39:53.706980    8300 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1014 13:39:53.707058    8300 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1014 13:39:53.721608    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1014 13:39:53.726010    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 13:39:53.727214    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 13:39:53.748244    8300 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 13:39:53.748324    8300 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 13:39:53.766870    8300 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1014 13:39:53.766935    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1014 13:39:53.798159    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 13:39:53.807972    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 13:39:53.829894    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1014 13:39:53.829968    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1014 13:39:53.831870    8300 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1014 13:39:53.831936    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1014 13:39:53.904458    8300 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1014 13:39:53.904529    8300 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1014 13:39:53.915013    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 13:39:53.918332    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1014 13:39:53.918352    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1014 13:39:53.967109    8300 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 13:39:53.967183    8300 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 13:39:53.995912    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1014 13:39:53.999172    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1014 13:39:54.008313    8300 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1014 13:39:54.008386    8300 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1014 13:39:54.050016    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1014 13:39:54.050117    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1014 13:39:54.150952    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 13:39:54.158895    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1014 13:39:54.158973    8300 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1014 13:39:54.216839    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1014 13:39:54.216910    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1014 13:39:54.298038    8300 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:54.298106    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1014 13:39:54.425246    8300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1014 13:39:54.425318    8300 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1014 13:39:54.518844    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:54.627793    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1014 13:39:54.627858    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1014 13:39:54.710668    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1014 13:39:54.710741    8300 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1014 13:39:54.878400    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1014 13:39:54.878468    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1014 13:39:55.067607    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1014 13:39:55.067709    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1014 13:39:55.102129    8300 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.196317747s)
	I1014 13:39:55.102343    8300 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1014 13:39:55.102246    8300 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.081032362s)
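
The sed pipeline that just completed (2.19s) injects a hosts block mapping host.minikube.internal to the gateway IP into CoreDNS's Corefile, immediately ahead of the forward directive. A pure-Go sketch of the same text edit:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block just before CoreDNS's forward
// directive, mirroring the sed expression from the log above.
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(block) // host record must precede the forwarder
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
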
	I1014 13:39:55.103403    8300 node_ready.go:35] waiting up to 6m0s for node "addons-002422" to be "Ready" ...
	I1014 13:39:55.260571    8300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 13:39:55.260641    8300 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1014 13:39:55.430413    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 13:39:56.191149    8300 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-002422" context rescaled to 1 replicas
	I1014 13:39:57.531843    8300 node_ready.go:53] node "addons-002422" has status "Ready":"False"
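
The "Ready":"False" line is one sample from a poll on the node's Ready condition, against the 6m0s budget declared above. A kubectl-based sketch of that poll, using JSONPath instead of client-go for brevity:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady reports whether the named node's Ready condition is True.
func nodeReady(node string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "node", node, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log
	for time.Now().Before(deadline) {
		if ok, err := nodeReady("addons-002422"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}
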
	I1014 13:39:58.190385    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.578417915s)
	I1014 13:39:58.190449    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.498880501s)
	I1014 13:39:58.190488    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.468813955s)
	I1014 13:39:58.190506    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.46442961s)
	I1014 13:39:58.190521    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.463252488s)
	I1014 13:39:58.190651    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.669384715s)
	I1014 13:39:58.489703    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.691446827s)
	I1014 13:39:59.453746    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.645701166s)
	I1014 13:39:59.453825    8300 addons.go:475] Verifying addon ingress=true in "addons-002422"
	I1014 13:39:59.454100    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.538956517s)
	I1014 13:39:59.454193    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.458209619s)
	I1014 13:39:59.454257    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.455024986s)
	I1014 13:39:59.454450    8300 addons.go:475] Verifying addon registry=true in "addons-002422"
	I1014 13:39:59.454313    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.303303753s)
	I1014 13:39:59.454779    8300 addons.go:475] Verifying addon metrics-server=true in "addons-002422"
	I1014 13:39:59.456620    8300 out.go:177] * Verifying registry addon...
	I1014 13:39:59.456787    8300 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-002422 service yakd-dashboard -n yakd-dashboard
	
	I1014 13:39:59.456791    8300 out.go:177] * Verifying ingress addon...
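The YAKD hint above relies on `minikube service`, which resolves the service endpoint and, on the docker driver, opens a host-side tunnel to it. A quick way to see what that command will target, assuming the addon's usual service name:

	# inspect the service the hint above points at
	kubectl --context addons-002422 -n yakd-dashboard get svc yakd-dashboard
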
	I1014 13:39:59.460826    8300 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1014 13:39:59.461823    8300 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1014 13:39:59.470853    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.95192719s)
	W1014 13:39:59.470897    8300 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1014 13:39:59.470916    8300 retry.go:31] will retry after 363.594794ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
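This failure is the usual CRD-establishment race, not a broken manifest: a single kubectl apply creates both the VolumeSnapshot CRDs and a VolumeSnapshotClass custom resource, and the API server has not started serving the new kind by the time the CR is submitted, hence "no matches for kind ... ensure CRDs are installed first". The retry scheduled here simply re-runs the apply (with --force, as seen below) once the CRDs have settled. When applying such bundles by hand, the race can be avoided by waiting for the CRD's Established condition first; a sketch:

	# apply the CRDs, wait until the API serves the new kind, then apply the CRs
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml
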
	I1014 13:39:59.475044    8300 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 13:39:59.475137    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:59.494186    8300 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1014 13:39:59.494260    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:59.618776    8300 node_ready.go:53] node "addons-002422" has status "Ready":"False"
	I1014 13:39:59.674045    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.243504428s)
	I1014 13:39:59.674122    8300 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-002422"
	I1014 13:39:59.677091    8300 out.go:177] * Verifying csi-hostpath-driver addon...
	I1014 13:39:59.680601    8300 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1014 13:39:59.688079    8300 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 13:39:59.688106    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:59.834651    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:59.966110    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:59.967159    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
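The interleaved kapi.go:96 lines that follow are minikube's readiness poll: each label selector is re-checked at roughly half-second intervals (visible in the timestamps) until every matching pod reports Ready. A one-shot equivalent for the registry selector, as a sketch:

	# block until registry pods are Ready instead of polling
	kubectl --context addons-002422 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m
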
	I1014 13:40:00.182897    8300 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1014 13:40:00.183061    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:40:00.208777    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
	I1014 13:40:00.215133    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:00.327148    8300 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1014 13:40:00.348098    8300 addons.go:234] Setting addon gcp-auth=true in "addons-002422"
	I1014 13:40:00.348157    8300 host.go:66] Checking if "addons-002422" exists ...
	I1014 13:40:00.348646    8300 cli_runner.go:164] Run: docker container inspect addons-002422 --format={{.State.Status}}
	I1014 13:40:00.371447    8300 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1014 13:40:00.371503    8300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-002422
	I1014 13:40:00.390256    8300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa Username:docker}
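Port 32768 in the sshutil.go lines is the host port Docker mapped to the node container's sshd; the inspect template above extracts it from NetworkSettings.Ports. A simpler lookup, plus a manual login built only from values shown in this log:

	docker port addons-002422 22/tcp
	ssh -p 32768 \
	  -i /home/jenkins/minikube-integration/19790-2228/.minikube/machines/addons-002422/id_rsa \
	  docker@127.0.0.1
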
	I1014 13:40:00.465620    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:00.466509    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:00.685248    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:00.963894    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:00.965769    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:01.184031    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:01.465535    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.465945    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:01.685003    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:01.964337    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.965880    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:02.107412    8300 node_ready.go:53] node "addons-002422" has status "Ready":"False"
	I1014 13:40:02.184392    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:02.467313    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:02.468545    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:02.630593    8300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.795900604s)
	I1014 13:40:02.630657    8300 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.259182574s)
	I1014 13:40:02.633830    8300 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:40:02.636777    8300 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1014 13:40:02.639988    8300 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1014 13:40:02.640015    8300 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1014 13:40:02.658660    8300 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1014 13:40:02.658723    8300 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1014 13:40:02.677247    8300 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 13:40:02.677270    8300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1014 13:40:02.686328    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:02.697294    8300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 13:40:02.965552    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:02.966886    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:03.206469    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:03.211991    8300 addons.go:475] Verifying addon gcp-auth=true in "addons-002422"
	I1014 13:40:03.215047    8300 out.go:177] * Verifying gcp-auth addon...
	I1014 13:40:03.218536    8300 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1014 13:40:03.230067    8300 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1014 13:40:03.230140    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
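gcp-auth works by admission: the webhook deployment being waited on here mutates new pods so they receive the application credentials that were scp'd into the node above. Assuming the addon's usual two-part layout, both halves can be checked with:

	kubectl --context addons-002422 -n gcp-auth get pods
	kubectl --context addons-002422 get mutatingwebhookconfigurations | grep -i gcp-auth
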
	I1014 13:40:03.464794    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:03.465865    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:03.684409    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:03.722413    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:03.964910    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:03.965754    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:04.184001    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:04.222224    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:04.464132    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.465835    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:04.607095    8300 node_ready.go:53] node "addons-002422" has status "Ready":"False"
	I1014 13:40:04.684540    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:04.722062    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:04.965026    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.966122    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:05.184931    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:05.222359    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:05.464871    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:05.465614    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:05.685221    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:05.722755    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:05.965721    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:05.966755    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:06.184612    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:06.226391    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:06.466295    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:06.466835    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:06.684279    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:06.722387    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:06.964517    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:06.966650    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:07.107318    8300 node_ready.go:53] node "addons-002422" has status "Ready":"False"
	I1014 13:40:07.184262    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.221898    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:07.464023    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:07.465715    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:07.684126    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.722266    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:07.964971    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:07.966311    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:08.185249    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:08.221712    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:08.465735    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:08.466519    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:08.643449    8300 node_ready.go:49] node "addons-002422" has status "Ready":"True"
	I1014 13:40:08.643475    8300 node_ready.go:38] duration metric: took 13.540005457s for node "addons-002422" to be "Ready" ...
	I1014 13:40:08.643486    8300 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
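Ready flipped about 13.5s into the watch, matching the duration metric above. The same condition can be read directly from the node object; a sketch:

	kubectl --context addons-002422 get node addons-002422 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
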
	I1014 13:40:08.703756    8300 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bsnhb" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:08.713250    8300 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 13:40:08.713277    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:08.737645    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:09.027476    8300 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 13:40:09.027504    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:09.028465    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:09.192685    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.235778    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:09.470625    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:09.471644    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:09.686336    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.724017    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:09.968520    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:09.969677    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:10.185754    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:10.284757    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:10.464657    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.468881    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:10.710741    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:10.770593    8300 pod_ready.go:103] pod "coredns-7c65d6cfc9-bsnhb" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:10.794079    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:10.965702    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.967342    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.185841    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:11.222030    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:11.467066    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:11.469206    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.685166    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:11.711397    8300 pod_ready.go:93] pod "coredns-7c65d6cfc9-bsnhb" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.711422    8300 pod_ready.go:82] duration metric: took 3.007626472s for pod "coredns-7c65d6cfc9-bsnhb" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.711456    8300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.716836    8300 pod_ready.go:93] pod "etcd-addons-002422" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.716863    8300 pod_ready.go:82] duration metric: took 5.39615ms for pod "etcd-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.716879    8300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.722185    8300 pod_ready.go:93] pod "kube-apiserver-addons-002422" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.722216    8300 pod_ready.go:82] duration metric: took 5.329212ms for pod "kube-apiserver-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.722228    8300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.722840    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:11.726976    8300 pod_ready.go:93] pod "kube-controller-manager-addons-002422" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.726999    8300 pod_ready.go:82] duration metric: took 4.763003ms for pod "kube-controller-manager-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.727014    8300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l8cm8" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.732189    8300 pod_ready.go:93] pod "kube-proxy-l8cm8" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.732216    8300 pod_ready.go:82] duration metric: took 5.194263ms for pod "kube-proxy-l8cm8" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.732230    8300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.965558    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.965832    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:12.107679    8300 pod_ready.go:93] pod "kube-scheduler-addons-002422" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:12.107702    8300 pod_ready.go:82] duration metric: took 375.464914ms for pod "kube-scheduler-addons-002422" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:12.107715    8300 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace to be "Ready" ...
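metrics-server is the pod the failing TestAddons/parallel/MetricsServer ultimately depends on; it typically stays NotReady until its APIService registration is reachable, so when the poll below drags on, both halves are worth checking. Assuming the standard registration name and pod label:

	kubectl --context addons-002422 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-002422 -n kube-system describe pod -l k8s-app=metrics-server
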
	I1014 13:40:12.187450    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:12.230307    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:12.467870    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:12.469141    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:12.686181    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:12.726472    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:12.968632    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:12.971453    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.186734    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:13.288379    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:13.468833    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.469572    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:13.691085    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:13.722796    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:13.967320    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.968732    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.123790    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:14.188791    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.223008    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:14.473340    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:14.476242    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.699309    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.728731    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:14.971976    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.972724    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.185896    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:15.222496    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:15.467542    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.468799    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:15.686904    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:15.786475    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:15.966125    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.967040    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:16.185797    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:16.221978    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:16.466570    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:16.467388    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:16.614049    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:16.685524    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:16.722681    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:16.964684    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:16.967332    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:17.186668    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:17.223243    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:17.466297    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:17.466753    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:17.686530    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:17.722651    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:17.970506    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:17.972546    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:18.186497    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.222309    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:18.466524    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.467462    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:18.614247    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:18.686853    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.721947    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:18.965775    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.966738    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.185186    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:19.222274    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:19.464580    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:19.466106    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.685875    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:19.721911    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:19.964671    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:19.966618    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:20.189311    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:20.222657    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:20.472199    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:20.473727    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:20.618297    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:20.686013    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:20.725067    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:20.965365    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:20.967580    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:21.186187    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:21.222418    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:21.466257    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:21.468884    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:21.695334    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:21.722420    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:21.967200    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:21.968432    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:22.186507    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:22.223515    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:22.467776    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:22.468823    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:22.686524    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:22.722555    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:22.966992    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:22.968380    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:23.127726    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:23.186075    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.221928    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:23.472480    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:23.475893    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:23.685670    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.722116    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:23.964675    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:23.966307    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.185949    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:24.221763    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:24.466929    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:24.467840    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.686500    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:24.723357    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:24.964654    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:24.966885    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:25.186770    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:25.224264    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:25.468835    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:25.472989    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:25.615515    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:25.685771    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:25.785531    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:25.966812    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:25.968108    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:26.187678    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:26.221680    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:26.469421    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:26.471831    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:26.686271    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:26.722724    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:26.967053    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:26.970512    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:27.185477    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:27.222867    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:27.466250    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:27.468931    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:27.688567    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:27.722803    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:27.967279    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:27.968910    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.120174    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:28.185963    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:28.222451    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:28.467731    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:28.469892    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.694484    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:28.723733    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:28.968844    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.970243    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:29.188565    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:29.222632    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:29.465237    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:29.470081    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:29.686314    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:29.721820    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:29.967077    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:29.968097    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:30.123378    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:30.186144    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.222392    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:30.467612    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:30.468606    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:30.685939    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.722292    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:30.966292    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:30.966498    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:31.187119    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:31.228397    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:31.469474    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:31.470647    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:31.687793    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:31.722951    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:31.966353    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:31.968952    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:32.191012    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:32.222245    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:32.493783    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:32.497471    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:32.616729    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:32.685402    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:32.723199    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:33.015091    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:33.016225    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.186185    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:33.222121    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:33.464925    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:33.466305    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.685313    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:33.721937    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:33.965674    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.966000    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:34.188134    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:34.222521    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:34.465926    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:34.466210    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:34.685495    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:34.721879    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:34.967054    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:34.967585    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:35.123958    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:35.186850    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:35.222328    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:35.474600    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:35.476259    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:35.688267    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:35.722698    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:35.966461    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:35.968904    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:36.189405    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:36.227105    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:36.467374    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:36.469201    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:36.687641    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:36.723276    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:36.966516    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:36.966899    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.186027    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:37.221954    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:37.466693    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:37.467661    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.613599    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:37.685542    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:37.722282    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:37.964239    8300 kapi.go:107] duration metric: took 38.50341222s to wait for kubernetes.io/minikube-addons=registry ...
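The 38.50341222s metric is consistent with the wall clock: the registry watch opened at 13:39:59.460826 (kapi.go:75 above) and the final poll cleared at 13:40:37.964239, a span of 38.503413s.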
	I1014 13:40:37.966629    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:38.185126    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:38.221942    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:38.466888    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:38.686438    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:38.722778    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:38.966517    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:39.187282    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:39.223207    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:39.466661    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:39.614336    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:39.691196    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:39.787651    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:39.966764    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:40.186084    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:40.222886    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:40.468879    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:40.687211    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:40.722327    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:40.968203    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:41.186447    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:41.222002    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:41.467195    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:41.614734    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:41.686099    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:41.722065    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:41.966707    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:42.185957    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:42.222263    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:42.466761    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:42.685295    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:42.722139    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:42.965961    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:43.185566    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.222396    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:43.466206    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:43.685170    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.722217    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:43.965841    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.119018    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:44.185599    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:44.222258    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:44.472427    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.685456    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:44.722820    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:44.967061    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:45.186268    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:45.222582    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:45.467133    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:45.686006    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:45.722745    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:45.966725    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:46.122727    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:46.185705    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:46.222478    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:46.470580    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:46.686007    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:46.723722    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:46.968777    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:47.189929    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.230753    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:47.468104    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:47.686508    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.735612    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:47.967384    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.186115    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:48.222574    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:48.467660    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.613831    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:48.686770    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:48.723180    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:48.967036    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:49.186235    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:49.222254    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:49.466998    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:49.688335    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:49.785244    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:49.965851    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:50.185449    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:50.222201    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:50.466967    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:50.615703    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:50.688793    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:50.723351    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:50.967653    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:51.187020    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:51.222617    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:51.466384    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:51.685756    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:51.722480    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:51.966614    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:52.185552    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:52.221973    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:52.467049    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:52.685690    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:52.721869    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:52.966897    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:53.117849    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:53.185524    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:53.222512    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:53.466694    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:53.685231    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:53.726031    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:53.966413    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:54.186863    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:54.222376    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:54.466654    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:54.686527    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:54.722195    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:54.967206    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:55.120495    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:55.186636    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:55.222945    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:55.466704    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:55.686587    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:55.723177    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:55.967943    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:56.185945    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:56.230424    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:56.467580    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:56.686756    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:56.729151    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:56.967597    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:57.138863    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:57.187768    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:57.222620    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:57.467322    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:57.686340    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:57.722924    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:57.973862    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:58.187396    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:58.223433    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:58.471576    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:58.696283    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:58.722845    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:58.969617    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:59.144349    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:59.187326    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:59.224062    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:59.468904    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:59.685865    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:59.723300    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:59.967201    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:00.187328    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:00.223296    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:00.469948    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:00.689322    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:00.787865    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:00.967220    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:01.193121    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:01.223356    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:01.466076    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:01.616801    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:01.685600    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:01.721282    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:01.968555    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:02.185627    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:02.221744    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:02.466461    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:02.685728    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:02.721776    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:02.966641    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:03.186738    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:03.222869    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:03.469224    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:03.685717    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:03.721948    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:03.967116    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:04.123040    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:04.186633    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:04.224324    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:04.466978    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:04.686272    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:04.722968    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:04.966310    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:05.185602    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:05.221985    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:05.466644    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:05.688078    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:05.722513    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:05.968444    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:06.193617    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:06.221743    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:06.466302    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:06.619047    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:06.687322    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:06.723683    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:06.966537    8300 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:07.188570    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:07.222504    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:07.468497    8300 kapi.go:107] duration metric: took 1m8.006680509s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1014 13:41:07.685477    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:07.721750    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:08.186738    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:08.222996    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:08.619965    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:08.686078    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:08.722502    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:09.186302    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:09.222663    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:09.685631    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:09.721891    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:10.186594    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:10.222211    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:10.686002    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:10.722457    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:11.130059    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:11.186102    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:11.222916    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:11.684891    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:11.722141    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:12.193144    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:12.222686    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:12.689014    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:12.788604    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:13.186121    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:13.222010    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:13.613504    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:13.687201    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:13.722294    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:14.186028    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:14.223174    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:14.693278    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:14.722092    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:15.190467    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:15.224301    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:15.614338    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:15.685666    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:15.722373    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:16.185958    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:16.222136    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:16.685847    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:16.722238    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:17.185998    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:17.221972    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:17.685169    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:17.721825    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:18.123627    8300 pod_ready.go:103] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"False"
	I1014 13:41:18.187717    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:18.222369    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:18.685613    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:18.727245    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:19.122743    8300 pod_ready.go:93] pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace has status "Ready":"True"
	I1014 13:41:19.122765    8300 pod_ready.go:82] duration metric: took 1m7.015042829s for pod "metrics-server-84c5f94fbc-p68nc" in "kube-system" namespace to be "Ready" ...
	I1014 13:41:19.122779    8300 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tnngr" in "kube-system" namespace to be "Ready" ...
	I1014 13:41:19.132407    8300 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-tnngr" in "kube-system" namespace has status "Ready":"True"
	I1014 13:41:19.132479    8300 pod_ready.go:82] duration metric: took 9.69201ms for pod "nvidia-device-plugin-daemonset-tnngr" in "kube-system" namespace to be "Ready" ...
	I1014 13:41:19.132516    8300 pod_ready.go:39] duration metric: took 1m10.489017042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
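The pod_ready lines apply a stricter test than the phase-based wait above: a pod counts as "Ready" only when its PodReady status condition is True, i.e. all of its containers pass their readiness probes, which is why metrics-server sat at Running-but-not-Ready for roughly 67 seconds here. A sketch of that check, reusing the corev1 alias for k8s.io/api/core/v1 from the sketch above (the helper name is an assumption, not minikube's pod_ready.go code):

// isPodReady reports whether the PodReady condition is True. A Running pod whose
// containers still fail readiness probes returns false.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}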
	I1014 13:41:19.132561    8300 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:41:19.132608    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 13:41:19.132693    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 13:41:19.190302    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:19.211811    8300 cri.go:89] found id: "8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:19.211879    8300 cri.go:89] found id: ""
	I1014 13:41:19.211901    8300 logs.go:282] 1 containers: [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74]
	I1014 13:41:19.211982    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.220526    8300 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 13:41:19.220641    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 13:41:19.241773    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:19.276339    8300 cri.go:89] found id: "1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:19.276410    8300 cri.go:89] found id: ""
	I1014 13:41:19.276433    8300 logs.go:282] 1 containers: [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896]
	I1014 13:41:19.276519    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.280479    8300 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 13:41:19.280599    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 13:41:19.351441    8300 cri.go:89] found id: "ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:19.351484    8300 cri.go:89] found id: ""
	I1014 13:41:19.351493    8300 logs.go:282] 1 containers: [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f]
	I1014 13:41:19.351555    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.355570    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 13:41:19.355656    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 13:41:19.413270    8300 cri.go:89] found id: "62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:19.413304    8300 cri.go:89] found id: ""
	I1014 13:41:19.413313    8300 logs.go:282] 1 containers: [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8]
	I1014 13:41:19.413381    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.420849    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 13:41:19.420934    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 13:41:19.483334    8300 cri.go:89] found id: "09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:19.483358    8300 cri.go:89] found id: ""
	I1014 13:41:19.483382    8300 logs.go:282] 1 containers: [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255]
	I1014 13:41:19.483446    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.487618    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 13:41:19.487717    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 13:41:19.551083    8300 cri.go:89] found id: "3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:19.551142    8300 cri.go:89] found id: ""
	I1014 13:41:19.551158    8300 logs.go:282] 1 containers: [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8]
	I1014 13:41:19.551215    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.554787    8300 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 13:41:19.554860    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 13:41:19.598310    8300 cri.go:89] found id: "47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:19.598380    8300 cri.go:89] found id: ""
	I1014 13:41:19.598395    8300 logs.go:282] 1 containers: [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e]
	I1014 13:41:19.598462    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:19.601905    8300 logs.go:123] Gathering logs for container status ...
	I1014 13:41:19.601926    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
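The backtick substitution in that command is what makes the fallback work: `which crictl || echo crictl` resolves to crictl's path when it is installed and to the bare word crictl otherwise, so on hosts without crictl the first `sudo ... ps -a` fails cleanly and the `|| sudo docker ps -a` branch takes over.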
	I1014 13:41:19.665148    8300 logs.go:123] Gathering logs for dmesg ...
	I1014 13:41:19.665224    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
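In the dmesg invocation, -P disables the pager, -H selects human-readable output, -L=never suppresses color escapes so the captured log stays clean, and --level warn,err,crit,alert,emerg drops informational noise; tail -n 400 then caps the capture at the most recent 400 lines, the same cap the crictl logs --tail and journalctl -n invocations in this pass use.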
	I1014 13:41:19.682907    8300 logs.go:123] Gathering logs for coredns [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f] ...
	I1014 13:41:19.682933    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:19.687721    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:19.722161    8300 kapi.go:107] duration metric: took 1m16.503621459s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1014 13:41:19.725432    8300 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-002422 cluster.
	I1014 13:41:19.728014    8300 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1014 13:41:19.730593    8300 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
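These three messages reflect how the gcp-auth addon works: it injects credentials through a mutating admission webhook, which only fires at pod creation time. That is why existing pods keep their original spec until they are recreated or the addon is re-enabled with --refresh, as the second and third messages advise.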
	I1014 13:41:19.735037    8300 logs.go:123] Gathering logs for kube-proxy [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255] ...
	I1014 13:41:19.735065    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:19.820881    8300 logs.go:123] Gathering logs for kube-controller-manager [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8] ...
	I1014 13:41:19.820907    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:19.893884    8300 logs.go:123] Gathering logs for kindnet [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e] ...
	I1014 13:41:19.893917    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:19.942315    8300 logs.go:123] Gathering logs for CRI-O ...
	I1014 13:41:19.942345    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 13:41:20.038083    8300 logs.go:123] Gathering logs for kubelet ...
	I1014 13:41:20.038174    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 13:41:20.115418    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.630422    1493 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-002422' and this object
	W1014 13:41:20.115710    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:20.115919    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:20.116169    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:20.116400    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:20.116656    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
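The "no relationship found between node 'addons-002422' and this object" warnings above are transient noise from the kube-apiserver's Node authorizer, which only lets a kubelet read a ConfigMap once some pod bound to that node references it. At 13:40:08 the addon pods were still being scheduled, so the kubelet's reflectors listed the ConfigMaps before that relationship existed; the watches succeed on retry, which is why the run continues normally.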
	I1014 13:41:20.152836    8300 logs.go:123] Gathering logs for describe nodes ...
	I1014 13:41:20.152914    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 13:41:20.186646    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:20.356145    8300 logs.go:123] Gathering logs for kube-apiserver [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74] ...
	I1014 13:41:20.356173    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:20.412553    8300 logs.go:123] Gathering logs for etcd [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896] ...
	I1014 13:41:20.412587    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:20.475037    8300 logs.go:123] Gathering logs for kube-scheduler [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8] ...
	I1014 13:41:20.475086    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:20.544679    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:20.544710    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 13:41:20.545077    8300 out.go:270] X Problems detected in kubelet:
	W1014 13:41:20.545098    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:20.545105    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:20.545119    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:20.545254    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:20.545261    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	I1014 13:41:20.545269    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:20.545282    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:41:20.686796    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:21.186158    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:21.686118    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:22.186247    8300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:22.685819    8300 kapi.go:107] duration metric: took 1m23.005216361s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1014 13:41:22.688880    8300 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, amd-gpu-device-plugin, storage-provisioner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1014 13:41:22.691714    8300 addons.go:510] duration metric: took 1m30.184000639s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner amd-gpu-device-plugin storage-provisioner default-storageclass storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1014 13:41:30.546877    8300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:41:30.560342    8300 api_server.go:72] duration metric: took 1m38.053028566s to wait for apiserver process to appear ...
	I1014 13:41:30.560367    8300 api_server.go:88] waiting for apiserver healthz status ...
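The pgrep flags pin down the right process: -f matches against the full command line, -x requires that match to be exact, and -n selects the newest match, so the pattern kube-apiserver.*minikube.* only hits this cluster's apiserver. The healthz wait that follows amounts to polling the apiserver's /healthz endpoint until it returns 200; a minimal sketch of such a poll, where the host 192.168.49.2 (the cluster IP reported elsewhere in this run), port 8443, and the 4-minute deadline are assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's cert is issued for its cluster names; skip verification
		// for this quick liveness probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthz: ok")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}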
	I1014 13:41:30.560402    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 13:41:30.560461    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 13:41:30.601242    8300 cri.go:89] found id: "8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:30.601265    8300 cri.go:89] found id: ""
	I1014 13:41:30.601273    8300 logs.go:282] 1 containers: [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74]
	I1014 13:41:30.601326    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.604628    8300 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 13:41:30.604697    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 13:41:30.644984    8300 cri.go:89] found id: "1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:30.645003    8300 cri.go:89] found id: ""
	I1014 13:41:30.645011    8300 logs.go:282] 1 containers: [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896]
	I1014 13:41:30.645062    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.648469    8300 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 13:41:30.648536    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 13:41:30.697128    8300 cri.go:89] found id: "ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:30.697146    8300 cri.go:89] found id: ""
	I1014 13:41:30.697153    8300 logs.go:282] 1 containers: [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f]
	I1014 13:41:30.697205    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.700974    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 13:41:30.701035    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 13:41:30.740346    8300 cri.go:89] found id: "62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:30.740369    8300 cri.go:89] found id: ""
	I1014 13:41:30.740376    8300 logs.go:282] 1 containers: [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8]
	I1014 13:41:30.740429    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.743903    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 13:41:30.743969    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 13:41:30.783592    8300 cri.go:89] found id: "09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:30.783616    8300 cri.go:89] found id: ""
	I1014 13:41:30.783624    8300 logs.go:282] 1 containers: [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255]
	I1014 13:41:30.783677    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.787072    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 13:41:30.787151    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 13:41:30.823473    8300 cri.go:89] found id: "3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:30.823549    8300 cri.go:89] found id: ""
	I1014 13:41:30.823572    8300 logs.go:282] 1 containers: [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8]
	I1014 13:41:30.823651    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.827113    8300 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 13:41:30.827178    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 13:41:30.865127    8300 cri.go:89] found id: "47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:30.865151    8300 cri.go:89] found id: ""
	I1014 13:41:30.865161    8300 logs.go:282] 1 containers: [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e]
	I1014 13:41:30.865215    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:30.869618    8300 logs.go:123] Gathering logs for dmesg ...
	I1014 13:41:30.869641    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 13:41:30.883538    8300 logs.go:123] Gathering logs for describe nodes ...
	I1014 13:41:30.883565    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 13:41:31.015963    8300 logs.go:123] Gathering logs for kube-scheduler [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8] ...
	I1014 13:41:31.015993    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:31.062622    8300 logs.go:123] Gathering logs for kindnet [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e] ...
	I1014 13:41:31.062651    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:31.105622    8300 logs.go:123] Gathering logs for container status ...
	I1014 13:41:31.105652    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 13:41:31.159051    8300 logs.go:123] Gathering logs for CRI-O ...
	I1014 13:41:31.159121    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 13:41:31.251884    8300 logs.go:123] Gathering logs for kubelet ...
	I1014 13:41:31.251917    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 13:41:31.324970    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.630422    1493 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-002422' and this object
	W1014 13:41:31.325225    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:31.325410    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:31.325632    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:31.325817    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:31.326041    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	I1014 13:41:31.362482    8300 logs.go:123] Gathering logs for kube-apiserver [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74] ...
	I1014 13:41:31.362509    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:31.418997    8300 logs.go:123] Gathering logs for etcd [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896] ...
	I1014 13:41:31.419027    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:31.467919    8300 logs.go:123] Gathering logs for coredns [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f] ...
	I1014 13:41:31.467949    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:31.507864    8300 logs.go:123] Gathering logs for kube-proxy [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255] ...
	I1014 13:41:31.507894    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:31.548235    8300 logs.go:123] Gathering logs for kube-controller-manager [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8] ...
	I1014 13:41:31.548260    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:31.618475    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:31.618509    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 13:41:31.618562    8300 out.go:270] X Problems detected in kubelet:
	W1014 13:41:31.618572    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:31.618580    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:31.618593    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:31.618601    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:31.618611    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	I1014 13:41:31.618619    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:31.618625    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
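
	The "X Problems detected in kubelet" block above is the node authorizer at work: a kubelet may only read a ConfigMap once a pod referencing it is bound to its node, and at startup the kubelet's reflector races that binding, so the "no relationship found between node 'addons-002422' and this object" errors are transient. A minimal check, assuming the cluster from this run is still up (context and node name are taken from the log; the impersonation flags are standard kubectl):

	    # Should report "yes" once the coredns pod is scheduled onto the node,
	    # because the node authorizer then has a pod -> configmap edge for it.
	    kubectl --context addons-002422 auth can-i get configmap/coredns \
	        --as=system:node:addons-002422 --as-group=system:nodes -n kube-system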
	I1014 13:41:41.619260    8300 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1014 13:41:41.627573    8300 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1014 13:41:41.628499    8300 api_server.go:141] control plane version: v1.31.1
	I1014 13:41:41.628521    8300 api_server.go:131] duration metric: took 11.068146645s to wait for apiserver health ...
	I1014 13:41:41.628529    8300 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:41:41.628550    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 13:41:41.628613    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 13:41:41.665965    8300 cri.go:89] found id: "8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:41.665995    8300 cri.go:89] found id: ""
	I1014 13:41:41.666002    8300 logs.go:282] 1 containers: [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74]
	I1014 13:41:41.666056    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.669487    8300 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 13:41:41.669557    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 13:41:41.708562    8300 cri.go:89] found id: "1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:41.708585    8300 cri.go:89] found id: ""
	I1014 13:41:41.708593    8300 logs.go:282] 1 containers: [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896]
	I1014 13:41:41.708646    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.712178    8300 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 13:41:41.712246    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 13:41:41.775326    8300 cri.go:89] found id: "ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:41.775347    8300 cri.go:89] found id: ""
	I1014 13:41:41.775355    8300 logs.go:282] 1 containers: [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f]
	I1014 13:41:41.775408    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.779511    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 13:41:41.779615    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 13:41:41.821335    8300 cri.go:89] found id: "62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:41.821356    8300 cri.go:89] found id: ""
	I1014 13:41:41.821363    8300 logs.go:282] 1 containers: [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8]
	I1014 13:41:41.821450    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.825710    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 13:41:41.825820    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 13:41:41.865087    8300 cri.go:89] found id: "09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:41.865108    8300 cri.go:89] found id: ""
	I1014 13:41:41.865116    8300 logs.go:282] 1 containers: [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255]
	I1014 13:41:41.865169    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.868563    8300 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 13:41:41.868634    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 13:41:41.907304    8300 cri.go:89] found id: "3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:41.907327    8300 cri.go:89] found id: ""
	I1014 13:41:41.907335    8300 logs.go:282] 1 containers: [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8]
	I1014 13:41:41.907391    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.910857    8300 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 13:41:41.910930    8300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 13:41:41.949718    8300 cri.go:89] found id: "47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:41.949744    8300 cri.go:89] found id: ""
	I1014 13:41:41.949752    8300 logs.go:282] 1 containers: [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e]
	I1014 13:41:41.949805    8300 ssh_runner.go:195] Run: which crictl
	I1014 13:41:41.953310    8300 logs.go:123] Gathering logs for kindnet [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e] ...
	I1014 13:41:41.953338    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e"
	I1014 13:41:41.996585    8300 logs.go:123] Gathering logs for container status ...
	I1014 13:41:41.996615    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 13:41:42.050322    8300 logs.go:123] Gathering logs for kubelet ...
	I1014 13:41:42.050352    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 13:41:42.135143    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.630422    1493 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-002422' and this object
	W1014 13:41:42.135373    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:42.135558    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:42.135780    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:42.135963    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:42.136185    8300 logs.go:138] Found kubelet problem: Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	I1014 13:41:42.175445    8300 logs.go:123] Gathering logs for etcd [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896] ...
	I1014 13:41:42.175490    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896"
	I1014 13:41:42.232021    8300 logs.go:123] Gathering logs for kube-scheduler [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8] ...
	I1014 13:41:42.232058    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8"
	I1014 13:41:42.276952    8300 logs.go:123] Gathering logs for kube-proxy [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255] ...
	I1014 13:41:42.276988    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255"
	I1014 13:41:42.319634    8300 logs.go:123] Gathering logs for kube-controller-manager [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8] ...
	I1014 13:41:42.319660    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8"
	I1014 13:41:42.396472    8300 logs.go:123] Gathering logs for CRI-O ...
	I1014 13:41:42.396507    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 13:41:42.493405    8300 logs.go:123] Gathering logs for dmesg ...
	I1014 13:41:42.493438    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 13:41:42.505382    8300 logs.go:123] Gathering logs for describe nodes ...
	I1014 13:41:42.505410    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 13:41:42.639254    8300 logs.go:123] Gathering logs for kube-apiserver [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74] ...
	I1014 13:41:42.639286    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74"
	I1014 13:41:42.707467    8300 logs.go:123] Gathering logs for coredns [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f] ...
	I1014 13:41:42.707498    8300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f"
	I1014 13:41:42.750023    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:42.750051    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 13:41:42.750110    8300 out.go:270] X Problems detected in kubelet:
	W1014 13:41:42.750126    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.630469    1493 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:42.750135    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631414    1493 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-002422' and this object
	W1014 13:41:42.750146    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631450    1493 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	W1014 13:41:42.750153    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: W1014 13:40:08.631773    1493 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-002422" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-002422' and this object
	W1014 13:41:42.750205    8300 out.go:270]   Oct 14 13:40:08 addons-002422 kubelet[1493]: E1014 13:40:08.631801    1493 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-002422\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-002422' and this object" logger="UnhandledError"
	I1014 13:41:42.750211    8300 out.go:358] Setting ErrFile to fd 2...
	I1014 13:41:42.750218    8300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:41:52.761041    8300 system_pods.go:59] 18 kube-system pods found
	I1014 13:41:52.761086    8300 system_pods.go:61] "coredns-7c65d6cfc9-bsnhb" [1719c402-d9cd-43d4-af23-a0333df02866] Running
	I1014 13:41:52.761095    8300 system_pods.go:61] "csi-hostpath-attacher-0" [1e5df543-7e1e-48cb-9857-ad4fa55eecc3] Running
	I1014 13:41:52.761101    8300 system_pods.go:61] "csi-hostpath-resizer-0" [3aacd79a-b371-4b56-bf98-d444c83b9439] Running
	I1014 13:41:52.761128    8300 system_pods.go:61] "csi-hostpathplugin-jrvhl" [cd5f386d-cfc5-4dc6-9ec6-5643a4184f8c] Running
	I1014 13:41:52.761139    8300 system_pods.go:61] "etcd-addons-002422" [055ec4e6-1017-4a4e-be4f-7a71bf7807a4] Running
	I1014 13:41:52.761144    8300 system_pods.go:61] "kindnet-xjsm2" [e0634e3a-e89d-46c3-befa-fa9f56e48570] Running
	I1014 13:41:52.761149    8300 system_pods.go:61] "kube-apiserver-addons-002422" [125f5bf2-9f9b-4b6f-b862-494aa9801820] Running
	I1014 13:41:52.761153    8300 system_pods.go:61] "kube-controller-manager-addons-002422" [a31d6a59-7270-4061-92a4-5065ef2d5330] Running
	I1014 13:41:52.761165    8300 system_pods.go:61] "kube-ingress-dns-minikube" [85b77aed-3ee1-4f75-97b3-879fb269f534] Running
	I1014 13:41:52.761169    8300 system_pods.go:61] "kube-proxy-l8cm8" [c57ee3d5-8ab2-46bd-b68b-80f6c3904d40] Running
	I1014 13:41:52.761174    8300 system_pods.go:61] "kube-scheduler-addons-002422" [1dc281ca-83cd-4762-9821-4e17445ccfea] Running
	I1014 13:41:52.761180    8300 system_pods.go:61] "metrics-server-84c5f94fbc-p68nc" [344d0c1c-bbea-4de6-a079-724c18606d38] Running
	I1014 13:41:52.761185    8300 system_pods.go:61] "nvidia-device-plugin-daemonset-tnngr" [a113dbce-1d95-437b-83fc-dd34499d10e4] Running
	I1014 13:41:52.761210    8300 system_pods.go:61] "registry-66c9cd494c-ddkrt" [091b0f03-dc90-4b2b-bbd3-c73a13edd832] Running
	I1014 13:41:52.761220    8300 system_pods.go:61] "registry-proxy-wjht4" [7f1138a2-5ec8-4c04-a3b7-fdb6c0af33aa] Running
	I1014 13:41:52.761224    8300 system_pods.go:61] "snapshot-controller-56fcc65765-d9p5h" [272bc704-122e-4ffe-a624-e7051cb8832f] Running
	I1014 13:41:52.761229    8300 system_pods.go:61] "snapshot-controller-56fcc65765-pq9xk" [c3e18049-be5f-43ff-a507-33cabb741de9] Running
	I1014 13:41:52.761236    8300 system_pods.go:61] "storage-provisioner" [832679c2-ca50-4565-b1cd-90c63d11988b] Running
	I1014 13:41:52.761243    8300 system_pods.go:74] duration metric: took 11.132707132s to wait for pod list to return data ...
	I1014 13:41:52.761252    8300 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:41:52.763788    8300 default_sa.go:45] found service account: "default"
	I1014 13:41:52.763813    8300 default_sa.go:55] duration metric: took 2.550674ms for default service account to be created ...
	I1014 13:41:52.763822    8300 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:41:52.773891    8300 system_pods.go:86] 18 kube-system pods found
	I1014 13:41:52.773928    8300 system_pods.go:89] "coredns-7c65d6cfc9-bsnhb" [1719c402-d9cd-43d4-af23-a0333df02866] Running
	I1014 13:41:52.773936    8300 system_pods.go:89] "csi-hostpath-attacher-0" [1e5df543-7e1e-48cb-9857-ad4fa55eecc3] Running
	I1014 13:41:52.773941    8300 system_pods.go:89] "csi-hostpath-resizer-0" [3aacd79a-b371-4b56-bf98-d444c83b9439] Running
	I1014 13:41:52.773969    8300 system_pods.go:89] "csi-hostpathplugin-jrvhl" [cd5f386d-cfc5-4dc6-9ec6-5643a4184f8c] Running
	I1014 13:41:52.773981    8300 system_pods.go:89] "etcd-addons-002422" [055ec4e6-1017-4a4e-be4f-7a71bf7807a4] Running
	I1014 13:41:52.773987    8300 system_pods.go:89] "kindnet-xjsm2" [e0634e3a-e89d-46c3-befa-fa9f56e48570] Running
	I1014 13:41:52.773993    8300 system_pods.go:89] "kube-apiserver-addons-002422" [125f5bf2-9f9b-4b6f-b862-494aa9801820] Running
	I1014 13:41:52.773997    8300 system_pods.go:89] "kube-controller-manager-addons-002422" [a31d6a59-7270-4061-92a4-5065ef2d5330] Running
	I1014 13:41:52.774002    8300 system_pods.go:89] "kube-ingress-dns-minikube" [85b77aed-3ee1-4f75-97b3-879fb269f534] Running
	I1014 13:41:52.774006    8300 system_pods.go:89] "kube-proxy-l8cm8" [c57ee3d5-8ab2-46bd-b68b-80f6c3904d40] Running
	I1014 13:41:52.774012    8300 system_pods.go:89] "kube-scheduler-addons-002422" [1dc281ca-83cd-4762-9821-4e17445ccfea] Running
	I1014 13:41:52.774017    8300 system_pods.go:89] "metrics-server-84c5f94fbc-p68nc" [344d0c1c-bbea-4de6-a079-724c18606d38] Running
	I1014 13:41:52.774021    8300 system_pods.go:89] "nvidia-device-plugin-daemonset-tnngr" [a113dbce-1d95-437b-83fc-dd34499d10e4] Running
	I1014 13:41:52.774024    8300 system_pods.go:89] "registry-66c9cd494c-ddkrt" [091b0f03-dc90-4b2b-bbd3-c73a13edd832] Running
	I1014 13:41:52.774028    8300 system_pods.go:89] "registry-proxy-wjht4" [7f1138a2-5ec8-4c04-a3b7-fdb6c0af33aa] Running
	I1014 13:41:52.774054    8300 system_pods.go:89] "snapshot-controller-56fcc65765-d9p5h" [272bc704-122e-4ffe-a624-e7051cb8832f] Running
	I1014 13:41:52.774059    8300 system_pods.go:89] "snapshot-controller-56fcc65765-pq9xk" [c3e18049-be5f-43ff-a507-33cabb741de9] Running
	I1014 13:41:52.774063    8300 system_pods.go:89] "storage-provisioner" [832679c2-ca50-4565-b1cd-90c63d11988b] Running
	I1014 13:41:52.774071    8300 system_pods.go:126] duration metric: took 10.242384ms to wait for k8s-apps to be running ...
	I1014 13:41:52.774078    8300 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:41:52.774154    8300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:41:52.786726    8300 system_svc.go:56] duration metric: took 12.638293ms WaitForService to wait for kubelet
	I1014 13:41:52.786757    8300 kubeadm.go:582] duration metric: took 2m0.279448218s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:41:52.786776    8300 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:41:52.790212    8300 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 13:41:52.790247    8300 node_conditions.go:123] node cpu capacity is 2
	I1014 13:41:52.790259    8300 node_conditions.go:105] duration metric: took 3.477745ms to run NodePressure ...
	I1014 13:41:52.790270    8300 start.go:241] waiting for startup goroutines ...
	I1014 13:41:52.790278    8300 start.go:246] waiting for cluster config update ...
	I1014 13:41:52.790293    8300 start.go:255] writing updated cluster config ...
	I1014 13:41:52.790588    8300 ssh_runner.go:195] Run: rm -f paused
	I1014 13:41:53.192225    8300 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 13:41:53.193794    8300 out.go:177] * Done! kubectl is now configured to use "addons-002422" cluster and "default" namespace by default
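
	Every "Gathering logs for ..." line in the cycles above maps to a single shell command run over SSH inside the node, and those commands appear verbatim in this log. The same bundle can be reproduced by hand after `minikube ssh -p addons-002422`; the container ID below is a placeholder for whatever the first command prints:

	    # Resolve the container ID of a component, then tail its logs.
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo /usr/bin/crictl logs --tail 400 <container-id>   # id from the line above
	    # Unit logs for the runtime and the kubelet, with the same flags minikube uses.
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    # Kernel messages at warning level and above.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400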
	
	
	==> CRI-O <==
	Oct 14 13:47:07 addons-002422 conmon[3587]: conmon 393d92e1891bd7e27285 <ninfo>: container 3598 exited with status 137
	Oct 14 13:47:07 addons-002422 crio[969]: time="2024-10-14 13:47:07.423405655Z" level=info msg="Stopped container 393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616: local-path-storage/local-path-provisioner-86d989889c-8sdx4/local-path-provisioner" id=e1aac363-a889-418e-b600-7ec53804ea6b name=/runtime.v1.RuntimeService/StopContainer
	Oct 14 13:47:07 addons-002422 crio[969]: time="2024-10-14 13:47:07.423913871Z" level=info msg="Stopping pod sandbox: 3c6acef92b0a241e5e372c986d914cde98ea63563a06370cd6d3fdc125ddc423" id=c6616074-dcd5-47fb-bdaa-41efef74e4bf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 13:47:07 addons-002422 crio[969]: time="2024-10-14 13:47:07.424133178Z" level=info msg="Got pod network &{Name:local-path-provisioner-86d989889c-8sdx4 Namespace:local-path-storage ID:3c6acef92b0a241e5e372c986d914cde98ea63563a06370cd6d3fdc125ddc423 UID:d9f1b3cb-6759-4b3d-bd83-10d3da28c9dc NetNS:/var/run/netns/61b88456-4286-4195-8a12-2bf8bc413939 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 14 13:47:07 addons-002422 crio[969]: time="2024-10-14 13:47:07.424276734Z" level=info msg="Deleting pod local-path-storage_local-path-provisioner-86d989889c-8sdx4 from CNI network \"kindnet\" (type=ptp)"
	Oct 14 13:47:07 addons-002422 crio[969]: time="2024-10-14 13:47:07.462648282Z" level=info msg="Stopped pod sandbox: 3c6acef92b0a241e5e372c986d914cde98ea63563a06370cd6d3fdc125ddc423" id=c6616074-dcd5-47fb-bdaa-41efef74e4bf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 13:47:08 addons-002422 crio[969]: time="2024-10-14 13:47:08.339148488Z" level=info msg="Removing container: 393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616" id=76c87d76-854a-4252-923f-6f1513cdfb23 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 13:47:08 addons-002422 crio[969]: time="2024-10-14 13:47:08.358966400Z" level=info msg="Removed container 393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616: local-path-storage/local-path-provisioner-86d989889c-8sdx4/local-path-provisioner" id=76c87d76-854a-4252-923f-6f1513cdfb23 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 13:47:26 addons-002422 crio[969]: time="2024-10-14 13:47:26.352332613Z" level=info msg="Stopping container: fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386 (timeout: 30s)" id=01726173-535c-4ea8-b3ce-88ef81a070ea name=/runtime.v1.RuntimeService/StopContainer
	Oct 14 13:47:26 addons-002422 conmon[3015]: conmon fa367e6127e279f39e3f <ninfo>: container 3026 exited with status 2
	Oct 14 13:47:26 addons-002422 crio[969]: time="2024-10-14 13:47:26.499559756Z" level=info msg="Stopped container fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386: default/cloud-spanner-emulator-5b584cc74-fwt5t/cloud-spanner-emulator" id=01726173-535c-4ea8-b3ce-88ef81a070ea name=/runtime.v1.RuntimeService/StopContainer
	Oct 14 13:47:26 addons-002422 crio[969]: time="2024-10-14 13:47:26.500121289Z" level=info msg="Stopping pod sandbox: f8152ffb6c4a3484698bf373f090286a904de10df108a09e459aa2c07df7486b" id=4d3ca065-654c-4685-879a-0b319da45d07 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 13:47:26 addons-002422 crio[969]: time="2024-10-14 13:47:26.500359180Z" level=info msg="Got pod network &{Name:cloud-spanner-emulator-5b584cc74-fwt5t Namespace:default ID:f8152ffb6c4a3484698bf373f090286a904de10df108a09e459aa2c07df7486b UID:86f73907-eaae-4e0f-a065-402b32cc3a03 NetNS:/var/run/netns/b5c71a6d-3048-49f3-b7b3-41e650678344 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 14 13:47:26 addons-002422 crio[969]: time="2024-10-14 13:47:26.500534080Z" level=info msg="Deleting pod default_cloud-spanner-emulator-5b584cc74-fwt5t from CNI network \"kindnet\" (type=ptp)"
	Oct 14 13:47:26 addons-002422 crio[969]: time="2024-10-14 13:47:26.526687183Z" level=info msg="Stopped pod sandbox: f8152ffb6c4a3484698bf373f090286a904de10df108a09e459aa2c07df7486b" id=4d3ca065-654c-4685-879a-0b319da45d07 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 13:47:27 addons-002422 crio[969]: time="2024-10-14 13:47:27.373073486Z" level=info msg="Removing container: fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386" id=1a651361-b95b-4700-8a13-e1682126b664 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 13:47:27 addons-002422 crio[969]: time="2024-10-14 13:47:27.391673809Z" level=info msg="Removed container fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386: default/cloud-spanner-emulator-5b584cc74-fwt5t/cloud-spanner-emulator" id=1a651361-b95b-4700-8a13-e1682126b664 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 13:47:47 addons-002422 crio[969]: time="2024-10-14 13:47:47.753403580Z" level=info msg="Stopping pod sandbox: 3c6acef92b0a241e5e372c986d914cde98ea63563a06370cd6d3fdc125ddc423" id=229c9258-524b-4a8d-b8fc-8961c1d4321f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 13:47:47 addons-002422 crio[969]: time="2024-10-14 13:47:47.753449086Z" level=info msg="Stopped pod sandbox (already stopped): 3c6acef92b0a241e5e372c986d914cde98ea63563a06370cd6d3fdc125ddc423" id=229c9258-524b-4a8d-b8fc-8961c1d4321f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 13:47:47 addons-002422 crio[969]: time="2024-10-14 13:47:47.754133671Z" level=info msg="Removing pod sandbox: 3c6acef92b0a241e5e372c986d914cde98ea63563a06370cd6d3fdc125ddc423" id=a42d379f-0f9c-42af-8856-f4c02fb7d70e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 14 13:47:47 addons-002422 crio[969]: time="2024-10-14 13:47:47.763464097Z" level=info msg="Removed pod sandbox: 3c6acef92b0a241e5e372c986d914cde98ea63563a06370cd6d3fdc125ddc423" id=a42d379f-0f9c-42af-8856-f4c02fb7d70e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 14 13:47:47 addons-002422 crio[969]: time="2024-10-14 13:47:47.764018623Z" level=info msg="Stopping pod sandbox: f8152ffb6c4a3484698bf373f090286a904de10df108a09e459aa2c07df7486b" id=03add572-678c-4958-8240-271cf4081b6a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 13:47:47 addons-002422 crio[969]: time="2024-10-14 13:47:47.764054799Z" level=info msg="Stopped pod sandbox (already stopped): f8152ffb6c4a3484698bf373f090286a904de10df108a09e459aa2c07df7486b" id=03add572-678c-4958-8240-271cf4081b6a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 13:47:47 addons-002422 crio[969]: time="2024-10-14 13:47:47.764365699Z" level=info msg="Removing pod sandbox: f8152ffb6c4a3484698bf373f090286a904de10df108a09e459aa2c07df7486b" id=584dcb1c-1056-45ee-83d2-aff1225b5bd3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 14 13:47:47 addons-002422 crio[969]: time="2024-10-14 13:47:47.774069507Z" level=info msg="Removed pod sandbox: f8152ffb6c4a3484698bf373f090286a904de10df108a09e459aa2c07df7486b" id=584dcb1c-1056-45ee-83d2-aff1225b5bd3 name=/runtime.v1.RuntimeService/RemovePodSandbox
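
	The CRI-O excerpt shows the normal pod teardown order: StopContainer (conmon's "exited with status 137" means SIGKILL after the stop timeout, 128 + 9; "status 2" is the process's own exit code), then StopPodSandbox, which also deletes the CNI attachment from the "kindnet" network, and finally RemoveContainer/RemovePodSandbox once the kubelet garbage-collects. A sketch of driving the same sequence by hand with standard crictl subcommands, using a sandbox ID from this log:

	    POD=3c6acef92b0a241e5e372c986d914cde98ea63563a06370cd6d3fdc125ddc423
	    sudo crictl stopp "$POD"   # StopPodSandbox: stops containers, tears down CNI
	    sudo crictl rmp "$POD"     # RemovePodSandbox: removes the sandbox record
	    sudo crictl pods           # verify the sandbox is gone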
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7581e29d62f8c       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   fa1f79a1ddb53       hello-world-app-55bf9c44b4-pfhmd
	a7421d31433bb       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago       Running             nginx                     0                   35c7a64d1ead9       nginx
	0a3873b6a1313       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   0b9b34a5ff6d3       busybox
	57a5d29f5a270       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   8 minutes ago       Running             metrics-server            0                   f5e4a601392aa       metrics-server-84c5f94fbc-p68nc
	ada184f93dd5b       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        8 minutes ago       Running             coredns                   0                   daba31545a435       coredns-7c65d6cfc9-bsnhb
	749f7ebdaeaf5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   1c28befd43fbe       storage-provisioner
	47e55f64e180f       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387                      8 minutes ago       Running             kindnet-cni               0                   d3f853ecbc8ad       kindnet-xjsm2
	09ddfab546738       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        8 minutes ago       Running             kube-proxy                0                   8d6d9e6d67223       kube-proxy-l8cm8
	1028165ec0621       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        8 minutes ago       Running             etcd                      0                   04b6b690c81f9       etcd-addons-002422
	62098d1172497       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        8 minutes ago       Running             kube-scheduler            0                   1d827eff7713c       kube-scheduler-addons-002422
	3e4cf70c88184       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        8 minutes ago       Running             kube-controller-manager   0                   76c74a21d4af4       kube-controller-manager-addons-002422
	8b5eecbb1fe82       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        8 minutes ago       Running             kube-apiserver            0                   93ae1f0de0f96       kube-apiserver-addons-002422
	
	
	==> coredns [ada184f93dd5b79075f6ed44ddcb44b635fe24c3c69ea498e797aabea7f5ee5f] <==
	[INFO] 10.244.0.20:41378 - 55723 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.001139534s
	[INFO] 10.244.0.20:41378 - 57711 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00133018s
	[INFO] 10.244.0.20:43628 - 38825 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002424208s
	[INFO] 10.244.0.20:41378 - 8394 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00194329s
	[INFO] 10.244.0.20:43628 - 3925 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001462733s
	[INFO] 10.244.0.20:41378 - 56080 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00013088s
	[INFO] 10.244.0.20:43628 - 50011 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004722s
	[INFO] 10.244.0.20:37040 - 39124 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000135975s
	[INFO] 10.244.0.20:47941 - 16722 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006921s
	[INFO] 10.244.0.20:37040 - 40132 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000137165s
	[INFO] 10.244.0.20:47941 - 48626 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000325054s
	[INFO] 10.244.0.20:47941 - 27590 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077612s
	[INFO] 10.244.0.20:37040 - 26942 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000241272s
	[INFO] 10.244.0.20:47941 - 24446 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063835s
	[INFO] 10.244.0.20:37040 - 5277 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044029s
	[INFO] 10.244.0.20:47941 - 58132 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000106634s
	[INFO] 10.244.0.20:37040 - 28746 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000121247s
	[INFO] 10.244.0.20:37040 - 23870 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000104107s
	[INFO] 10.244.0.20:47941 - 28819 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00068173s
	[INFO] 10.244.0.20:37040 - 30890 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001596658s
	[INFO] 10.244.0.20:47941 - 15070 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001633563s
	[INFO] 10.244.0.20:47941 - 14827 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00140428s
	[INFO] 10.244.0.20:47941 - 3993 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053932s
	[INFO] 10.244.0.20:37040 - 50903 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002147655s
	[INFO] 10.244.0.20:37040 - 54380 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053891s
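
	The NXDOMAIN/NOERROR fan-out above is ordinary ndots expansion, not a DNS fault. The querying pod (the ingress controller, judging by the ingress-nginx suffixes) resolves hello-world-app.default.svc.cluster.local, which has only four dots; with the default `options ndots:5` each search-list suffix (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) is tried and returns NXDOMAIN before the bare name finally answers NOERROR. A quick check, assuming the busybox pod from this run is still running:

	    # Show the search path that produces the fan-out.
	    kubectl --context addons-002422 exec busybox -- cat /etc/resolv.conf
	    # A trailing dot marks the name fully qualified and skips the search list.
	    kubectl --context addons-002422 exec busybox -- \
	        nslookup hello-world-app.default.svc.cluster.local.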
	
	
	==> describe nodes <==
	Name:               addons-002422
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-002422
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=addons-002422
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T13_39_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-002422
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:39:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-002422
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 13:48:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:46:55 +0000   Mon, 14 Oct 2024 13:39:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:46:55 +0000   Mon, 14 Oct 2024 13:39:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:46:55 +0000   Mon, 14 Oct 2024 13:39:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:46:55 +0000   Mon, 14 Oct 2024 13:40:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-002422
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 216d99f7dc424e599d6a70e41b29e088
	  System UUID:                51be1b84-8333-4024-a862-c04d66a5271b
	  Boot ID:                    c1fb5e99-d9c3-4e62-b114-4b2c9a33f58a
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  default                     hello-world-app-55bf9c44b4-pfhmd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-7c65d6cfc9-bsnhb                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m24s
	  kube-system                 etcd-addons-002422                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m29s
	  kube-system                 kindnet-xjsm2                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m25s
	  kube-system                 kube-apiserver-addons-002422             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-addons-002422    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-proxy-l8cm8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-scheduler-addons-002422             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 metrics-server-84c5f94fbc-p68nc          100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         8m19s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m23s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  8m36s (x8 over 8m36s)  kubelet          Node addons-002422 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m36s (x8 over 8m36s)  kubelet          Node addons-002422 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m36s (x7 over 8m36s)  kubelet          Node addons-002422 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m29s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m29s (x2 over 8m29s)  kubelet          Node addons-002422 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m29s (x2 over 8m29s)  kubelet          Node addons-002422 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m29s (x2 over 8m29s)  kubelet          Node addons-002422 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m25s                  node-controller  Node addons-002422 event: Registered Node addons-002422 in Controller
	  Normal   NodeReady                8m8s                   kubelet          Node addons-002422 status is now: NodeReady
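
	The Allocated resources block is simply the column sums of the pod table above: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 950m, i.e. 47% of the 2-CPU node; memory requests 70Mi + 100Mi + 50Mi + 200Mi = 420Mi; the only limits set are kindnet's 100m CPU and 50Mi memory plus coredns's 170Mi memory cap, hence 100m / 220Mi. To re-derive it on a live cluster:

	    kubectl --context addons-002422 describe node addons-002422 \
	        | sed -n '/Non-terminated Pods/,/Events:/p'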
	
	
	==> dmesg <==
	[Oct14 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014835] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.475618] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.053479] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015843] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.695923] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.686422] kauditd_printk_skb: 34 callbacks suppressed
	
	
	==> etcd [1028165ec062157439f733ab6a35f8de542a7bec1f3b417ae6d993ec6d72f896] <==
	{"level":"info","ts":"2024-10-14T13:39:41.673517Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T13:39:41.674471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T13:39:41.677088Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:39:41.677213Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:39:41.677265Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:39:41.746155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-14T13:39:55.704344Z","caller":"traceutil/trace.go:171","msg":"trace[1703535512] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"125.061507ms","start":"2024-10-14T13:39:55.579265Z","end":"2024-10-14T13:39:55.704327Z","steps":["trace[1703535512] 'process raft request'  (duration: 100.74032ms)","trace[1703535512] 'compare'  (duration: 24.075148ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T13:39:55.709244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.350627ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:39:55.742502Z","caller":"traceutil/trace.go:171","msg":"trace[1742711474] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:398; }","duration":"137.617824ms","start":"2024-10-14T13:39:55.604866Z","end":"2024-10-14T13:39:55.742484Z","steps":["trace[1742711474] 'agreement among raft nodes before linearized reading'  (duration: 104.309799ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:39:55.709463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.395559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-10-14T13:39:55.743038Z","caller":"traceutil/trace.go:171","msg":"trace[1917448555] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:400; }","duration":"137.96655ms","start":"2024-10-14T13:39:55.605059Z","end":"2024-10-14T13:39:55.743026Z","steps":["trace[1917448555] 'agreement among raft nodes before linearized reading'  (duration: 104.369401ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:39:56.559335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.136832ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032554294518971 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:399 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3174 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-14T13:39:56.567126Z","caller":"traceutil/trace.go:171","msg":"trace[696180709] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"201.90427ms","start":"2024-10-14T13:39:56.365204Z","end":"2024-10-14T13:39:56.567108Z","steps":["trace[696180709] 'process raft request'  (duration: 89.922009ms)","trace[696180709] 'compare'  (duration: 101.057432ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T13:39:56.567433Z","caller":"traceutil/trace.go:171","msg":"trace[208790586] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"202.128434ms","start":"2024-10-14T13:39:56.365293Z","end":"2024-10-14T13:39:56.567421Z","steps":["trace[208790586] 'process raft request'  (duration: 194.656711ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:39:56.567731Z","caller":"traceutil/trace.go:171","msg":"trace[2018964444] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"202.214382ms","start":"2024-10-14T13:39:56.365509Z","end":"2024-10-14T13:39:56.567723Z","steps":["trace[2018964444] 'process raft request'  (duration: 194.531967ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:39:56.567891Z","caller":"traceutil/trace.go:171","msg":"trace[1274013744] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"195.842423ms","start":"2024-10-14T13:39:56.372041Z","end":"2024-10-14T13:39:56.567884Z","steps":["trace[1274013744] 'process raft request'  (duration: 188.046179ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:39:56.567917Z","caller":"traceutil/trace.go:171","msg":"trace[1855707437] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"195.803604ms","start":"2024-10-14T13:39:56.372108Z","end":"2024-10-14T13:39:56.567912Z","steps":["trace[1855707437] 'process raft request'  (duration: 188.006999ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:39:56.568946Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.89416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:39:56.601194Z","caller":"traceutil/trace.go:171","msg":"trace[345260323] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:413; }","duration":"195.413853ms","start":"2024-10-14T13:39:56.405766Z","end":"2024-10-14T13:39:56.601180Z","steps":["trace[345260323] 'agreement among raft nodes before linearized reading'  (duration: 161.872631ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:39:56.567358Z","caller":"traceutil/trace.go:171","msg":"trace[1219537254] linearizableReadLoop","detail":"{readStateIndex:426; appliedIndex:420; }","duration":"161.573333ms","start":"2024-10-14T13:39:56.405771Z","end":"2024-10-14T13:39:56.567344Z","steps":["trace[1219537254] 'read index received'  (duration: 6.757042ms)","trace[1219537254] 'applied index is now lower than readState.Index'  (duration: 154.814182ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T13:39:56.569116Z","caller":"traceutil/trace.go:171","msg":"trace[1142324761] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"143.450437ms","start":"2024-10-14T13:39:56.425656Z","end":"2024-10-14T13:39:56.569106Z","steps":["trace[1142324761] 'process raft request'  (duration: 142.359271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:39:56.614133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.027147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-002422\" ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2024-10-14T13:39:56.614713Z","caller":"traceutil/trace.go:171","msg":"trace[111109841] range","detail":"{range_begin:/registry/minions/addons-002422; range_end:; response_count:1; response_revision:418; }","duration":"159.613731ms","start":"2024-10-14T13:39:56.455086Z","end":"2024-10-14T13:39:56.614700Z","steps":["trace[111109841] 'agreement among raft nodes before linearized reading'  (duration: 159.001309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:39:56.614968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.299264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:39:56.616701Z","caller":"traceutil/trace.go:171","msg":"trace[633126474] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:0; response_revision:418; }","duration":"191.08861ms","start":"2024-10-14T13:39:56.425600Z","end":"2024-10-14T13:39:56.616689Z","steps":["trace[633126474] 'agreement among raft nodes before linearized reading'  (duration: 189.273172ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:48:16 up 30 min,  0 users,  load average: 0.53, 0.54, 0.42
	Linux addons-002422 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [47e55f64e180ffb927512f85e202bb19ab2a989edeae9f3711eb8b4b9204e17e] <==
	I1014 13:46:08.444845       1 main.go:300] handling current node
	I1014 13:46:18.444333       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:46:18.444384       1 main.go:300] handling current node
	I1014 13:46:28.444893       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:46:28.444938       1 main.go:300] handling current node
	I1014 13:46:38.444888       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:46:38.444921       1 main.go:300] handling current node
	I1014 13:46:48.444840       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:46:48.444973       1 main.go:300] handling current node
	I1014 13:46:58.444766       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:46:58.444798       1 main.go:300] handling current node
	I1014 13:47:08.444559       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:47:08.444591       1 main.go:300] handling current node
	I1014 13:47:18.444033       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:47:18.444069       1 main.go:300] handling current node
	I1014 13:47:28.444531       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:47:28.444560       1 main.go:300] handling current node
	I1014 13:47:38.444692       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:47:38.444721       1 main.go:300] handling current node
	I1014 13:47:48.445596       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:47:48.445628       1 main.go:300] handling current node
	I1014 13:47:58.444033       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:47:58.444066       1 main.go:300] handling current node
	I1014 13:48:08.445941       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:48:08.445974       1 main.go:300] handling current node
	
	
	==> kube-apiserver [8b5eecbb1fe82d3ce49c4e32d7e54fd8dad0e826a894d6005ed7aac0c04bef74] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 13:41:19.102837       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1014 13:42:05.216035       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44098: use of closed network connection
	E1014 13:42:05.455514       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44118: use of closed network connection
	I1014 13:42:14.866585       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.229.9"}
	I1014 13:43:03.063091       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1014 13:43:17.774362       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:43:17.774425       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:43:17.844707       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:43:17.844880       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:43:17.905822       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:43:17.905940       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:43:17.942397       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:43:17.942432       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1014 13:43:18.908004       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1014 13:43:18.942558       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1014 13:43:19.035704       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1014 13:43:31.519357       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1014 13:43:32.552131       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1014 13:43:37.064523       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1014 13:43:37.356768       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.40.124"}
	I1014 13:45:57.249198       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.166.184"}
	E1014 13:46:01.498539       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1014 13:46:52.562592       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [3e4cf70c881841234f19b88ecc5497bac13aae34c6605d6a448de2ce998ca7a8] <==
	W1014 13:46:21.351022       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:46:21.351064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1014 13:46:22.311961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="7.303µs"
	I1014 13:46:24.709781       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-002422"
	I1014 13:46:32.427851       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1014 13:46:37.260332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="5.588µs"
	W1014 13:46:39.720928       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:46:39.720969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:46:48.887321       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:46:48.887362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1014 13:46:55.575852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-002422"
	W1014 13:46:58.117101       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:46:58.117144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:47:10.799494       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:47:10.799538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1014 13:47:24.788424       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I1014 13:47:26.334775       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="4.865µs"
	W1014 13:47:27.508301       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:47:27.508441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:47:34.600056       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:47:34.600098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:47:48.864363       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:47:48.864406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:47:57.808224       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:47:57.808265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [09ddfab546738d1eab72c46ef4b7d84c7c88574b12d387c037cceeaf1a909255] <==
	I1014 13:39:52.294288       1 server_linux.go:66] "Using iptables proxy"
	I1014 13:39:52.394712       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1014 13:39:52.394871       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:39:52.421809       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 13:39:52.421919       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:39:52.425428       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:39:52.439398       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:39:52.439423       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:39:52.440582       1 config.go:199] "Starting service config controller"
	I1014 13:39:52.440648       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:39:52.444864       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:39:52.444953       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:39:52.445458       1 config.go:328] "Starting node config controller"
	I1014 13:39:52.445546       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:39:52.548284       1 shared_informer.go:320] Caches are synced for node config
	I1014 13:39:52.548384       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:39:52.548437       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [62098d11724974c824d47af9ef75592c9f29ddecba7112f8fd3fed3c259db4b8] <==
	W1014 13:39:45.325072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.325155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.325293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 13:39:45.325348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.325453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 13:39:45.325501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.325587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.325633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.326335       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 13:39:45.326407       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 13:39:45.326552       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 13:39:45.326605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.326778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 13:39:45.326824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.328929       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 13:39:45.328970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.329020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 13:39:45.329066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.329081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.329201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.329139       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.329321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.329036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 13:39:45.329418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 13:39:46.516822       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 13:47:08 addons-002422 kubelet[1493]: I1014 13:47:08.337704    1493 scope.go:117] "RemoveContainer" containerID="393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616"
	Oct 14 13:47:08 addons-002422 kubelet[1493]: I1014 13:47:08.359410    1493 scope.go:117] "RemoveContainer" containerID="393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616"
	Oct 14 13:47:08 addons-002422 kubelet[1493]: E1014 13:47:08.359794    1493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616\": container with ID starting with 393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616 not found: ID does not exist" containerID="393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616"
	Oct 14 13:47:08 addons-002422 kubelet[1493]: I1014 13:47:08.359832    1493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616"} err="failed to get container status \"393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616\": rpc error: code = NotFound desc = could not find container \"393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616\": container with ID starting with 393d92e1891bd7e27285ddc1eba69f35c09e9b8f55a11bf0030af0bf0d079616 not found: ID does not exist"
	Oct 14 13:47:09 addons-002422 kubelet[1493]: I1014 13:47:09.228666    1493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f1b3cb-6759-4b3d-bd83-10d3da28c9dc" path="/var/lib/kubelet/pods/d9f1b3cb-6759-4b3d-bd83-10d3da28c9dc/volumes"
	Oct 14 13:47:17 addons-002422 kubelet[1493]: E1014 13:47:17.435769    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913637435524959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:47:17 addons-002422 kubelet[1493]: E1014 13:47:17.435809    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913637435524959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:47:26 addons-002422 kubelet[1493]: I1014 13:47:26.710136    1493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nj7r\" (UniqueName: \"kubernetes.io/projected/86f73907-eaae-4e0f-a065-402b32cc3a03-kube-api-access-5nj7r\") pod \"86f73907-eaae-4e0f-a065-402b32cc3a03\" (UID: \"86f73907-eaae-4e0f-a065-402b32cc3a03\") "
	Oct 14 13:47:26 addons-002422 kubelet[1493]: I1014 13:47:26.714197    1493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f73907-eaae-4e0f-a065-402b32cc3a03-kube-api-access-5nj7r" (OuterVolumeSpecName: "kube-api-access-5nj7r") pod "86f73907-eaae-4e0f-a065-402b32cc3a03" (UID: "86f73907-eaae-4e0f-a065-402b32cc3a03"). InnerVolumeSpecName "kube-api-access-5nj7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 14 13:47:26 addons-002422 kubelet[1493]: I1014 13:47:26.810491    1493 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5nj7r\" (UniqueName: \"kubernetes.io/projected/86f73907-eaae-4e0f-a065-402b32cc3a03-kube-api-access-5nj7r\") on node \"addons-002422\" DevicePath \"\""
	Oct 14 13:47:27 addons-002422 kubelet[1493]: I1014 13:47:27.372059    1493 scope.go:117] "RemoveContainer" containerID="fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386"
	Oct 14 13:47:27 addons-002422 kubelet[1493]: I1014 13:47:27.392038    1493 scope.go:117] "RemoveContainer" containerID="fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386"
	Oct 14 13:47:27 addons-002422 kubelet[1493]: E1014 13:47:27.392404    1493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386\": container with ID starting with fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386 not found: ID does not exist" containerID="fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386"
	Oct 14 13:47:27 addons-002422 kubelet[1493]: I1014 13:47:27.392438    1493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386"} err="failed to get container status \"fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386\": rpc error: code = NotFound desc = could not find container \"fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386\": container with ID starting with fa367e6127e279f39e3fe764bdcb0546d7243ac35fcb13dd8cf95decace4a386 not found: ID does not exist"
	Oct 14 13:47:27 addons-002422 kubelet[1493]: E1014 13:47:27.438173    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913647437922510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:47:27 addons-002422 kubelet[1493]: E1014 13:47:27.438219    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913647437922510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:47:29 addons-002422 kubelet[1493]: I1014 13:47:29.228160    1493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f73907-eaae-4e0f-a065-402b32cc3a03" path="/var/lib/kubelet/pods/86f73907-eaae-4e0f-a065-402b32cc3a03/volumes"
	Oct 14 13:47:37 addons-002422 kubelet[1493]: E1014 13:47:37.440832    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913657440580423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:47:37 addons-002422 kubelet[1493]: E1014 13:47:37.440868    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913657440580423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:47:47 addons-002422 kubelet[1493]: E1014 13:47:47.443641    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913667443419349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:47:47 addons-002422 kubelet[1493]: E1014 13:47:47.443677    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913667443419349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:47:57 addons-002422 kubelet[1493]: E1014 13:47:57.447073    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913677446782819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:47:57 addons-002422 kubelet[1493]: E1014 13:47:57.447108    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913677446782819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:48:07 addons-002422 kubelet[1493]: E1014 13:48:07.449801    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913687449605013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:48:07 addons-002422 kubelet[1493]: E1014 13:48:07.449836    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913687449605013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [749f7ebdaeaf50739e47418bda3ae0c2d5a85bd04259b5f9d851861c9e661f83] <==
	I1014 13:40:09.372284       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 13:40:09.406109       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 13:40:09.406166       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 13:40:09.433641       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 13:40:09.434046       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-002422_b643fb17-4d87-4a06-8a88-cc3ffff5f150!
	I1014 13:40:09.435321       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8963a5d4-969c-4353-a393-1ec58810a372", APIVersion:"v1", ResourceVersion:"902", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-002422_b643fb17-4d87-4a06-8a88-cc3ffff5f150 became leader
	I1014 13:40:09.535214       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-002422_b643fb17-4d87-4a06-8a88-cc3ffff5f150!
	

                                                
                                                
-- /stdout --
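The storage-provisioner log at the end of the dump above shows client-go's standard leader-election handshake: attempt to acquire a named lock in kube-system, and start the provisioner controller only once the lease is held. The sketch below illustrates the same acquire-then-start pattern; it is illustrative only, assumes a reachable cluster via $KUBECONFIG, and uses a Lease-based lock with a hypothetical identity, which may differ from the lock type the provisioner itself uses.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumption: KUBECONFIG points at a reachable cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock named after the lease seen in the log above;
	// the identity is hypothetical.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-candidate"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("successfully acquired lease; starting controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; stopping")
			},
		},
	})
}

RunOrDie blocks, retrying acquisition every RetryPeriod and renewing within RenewDeadline once elected, which matches the "attempting to acquire leader lease ... successfully acquired lease" sequence in the log.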
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-002422 -n addons-002422
helpers_test.go:261: (dbg) Run:  kubectl --context addons-002422 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (346.12s)
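The etcd section of the post-mortem above is dominated by "apply request took too long" warnings, which etcd emits for any request that exceeds its 100ms expected-duration, alongside the accompanying trace entries. A minimal stand-alone sketch (not part of the test suite) for pulling those warnings out of a saved dump follows; the JSON field names are taken from the log lines above, and the scanner simply skips anything that is not a well-formed etcd JSON line.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
	"time"
)

// etcdLine holds just the fields this sketch needs; the names match the JSON
// keys visible in the etcd log lines above.
type etcdLine struct {
	Level    string `json:"level"`
	Msg      string `json:"msg"`
	Took     string `json:"took"`              // e.g. "161.89416ms"
	Expected string `json:"expected-duration"` // etcd logs "100ms"
	Request  string `json:"request"`           // the key range that was slow
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // trace lines can be long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip the non-JSON lines mixed into the dump
		}
		var e etcdLine
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			continue
		}
		if e.Level != "warn" || e.Msg != "apply request took too long" {
			continue
		}
		took, err := time.ParseDuration(e.Took)
		if err != nil {
			continue
		}
		fmt.Printf("slow apply: took=%v expected=%s request=%q\n", took, e.Expected, e.Request)
	}
}

Running it as: go run slowapply.go < postmortem.log prints one line per slow apply together with the offending key range.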

                                                
                                    

Test pass (297/329)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 12.96
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.23
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 171.73
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/PullSecret 11.84
34 TestAddons/parallel/Registry 17.46
36 TestAddons/parallel/InspektorGadget 11.71
39 TestAddons/parallel/CSI 53.58
40 TestAddons/parallel/Headlamp 17.99
41 TestAddons/parallel/CloudSpanner 6.54
42 TestAddons/parallel/LocalPath 52.32
43 TestAddons/parallel/NvidiaDevicePlugin 6.52
44 TestAddons/parallel/Yakd 11.71
46 TestAddons/StoppedEnableDisable 12.18
47 TestCertOptions 38.07
48 TestCertExpiration 240.47
50 TestForceSystemdFlag 41.34
51 TestForceSystemdEnv 40.39
57 TestErrorSpam/setup 31.85
58 TestErrorSpam/start 0.77
59 TestErrorSpam/status 1.03
60 TestErrorSpam/pause 1.7
61 TestErrorSpam/unpause 1.73
62 TestErrorSpam/stop 1.48
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 47.59
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 29.44
69 TestFunctional/serial/KubeContext 0.06
70 TestFunctional/serial/KubectlGetPods 0.1
73 TestFunctional/serial/CacheCmd/cache/add_remote 4.14
74 TestFunctional/serial/CacheCmd/cache/add_local 1.38
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
76 TestFunctional/serial/CacheCmd/cache/list 0.06
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
78 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
79 TestFunctional/serial/CacheCmd/cache/delete 0.12
80 TestFunctional/serial/MinikubeKubectlCmd 0.16
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
82 TestFunctional/serial/ExtraConfig 36.23
83 TestFunctional/serial/ComponentHealth 0.1
84 TestFunctional/serial/LogsCmd 1.65
85 TestFunctional/serial/LogsFileCmd 1.65
86 TestFunctional/serial/InvalidService 4.51
88 TestFunctional/parallel/ConfigCmd 0.48
89 TestFunctional/parallel/DashboardCmd 10.21
90 TestFunctional/parallel/DryRun 0.42
91 TestFunctional/parallel/InternationalLanguage 0.19
92 TestFunctional/parallel/StatusCmd 1.11
96 TestFunctional/parallel/ServiceCmdConnect 11.65
97 TestFunctional/parallel/AddonsCmd 0.21
98 TestFunctional/parallel/PersistentVolumeClaim 26.53
100 TestFunctional/parallel/SSHCmd 0.63
101 TestFunctional/parallel/CpCmd 2.25
103 TestFunctional/parallel/FileSync 0.34
104 TestFunctional/parallel/CertSync 2.11
108 TestFunctional/parallel/NodeLabels 0.09
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
112 TestFunctional/parallel/License 0.28
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.49
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
126 TestFunctional/parallel/ProfileCmd/profile_list 0.43
127 TestFunctional/parallel/ServiceCmd/List 0.56
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/MountCmd/any-port 9.7
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
132 TestFunctional/parallel/ServiceCmd/Format 0.47
133 TestFunctional/parallel/ServiceCmd/URL 0.48
134 TestFunctional/parallel/MountCmd/specific-port 2.03
135 TestFunctional/parallel/MountCmd/VerifyCleanup 2.44
136 TestFunctional/parallel/Version/short 0.1
137 TestFunctional/parallel/Version/components 1.27
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.61
143 TestFunctional/parallel/ImageCommands/Setup 0.72
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.58
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 175.04
161 TestMultiControlPlane/serial/DeployApp 8.14
162 TestMultiControlPlane/serial/PingHostFromPods 1.5
163 TestMultiControlPlane/serial/AddWorkerNode 35.79
164 TestMultiControlPlane/serial/NodeLabels 0.13
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
166 TestMultiControlPlane/serial/CopyFile 17.91
167 TestMultiControlPlane/serial/StopSecondaryNode 12.7
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
169 TestMultiControlPlane/serial/RestartSecondaryNode 21.56
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.42
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 165.9
172 TestMultiControlPlane/serial/DeleteSecondaryNode 12.63
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
174 TestMultiControlPlane/serial/StopCluster 35.82
175 TestMultiControlPlane/serial/RestartCluster 72.42
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
177 TestMultiControlPlane/serial/AddSecondaryNode 73.78
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
182 TestJSONOutput/start/Command 79.53
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.7
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.63
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.87
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.21
207 TestKicCustomNetwork/create_custom_network 36.98
208 TestKicCustomNetwork/use_default_bridge_network 31.86
209 TestKicExistingNetwork 32.06
210 TestKicCustomSubnet 34.07
211 TestKicStaticIP 35.65
212 TestMainNoArgs 0.06
213 TestMinikubeProfile 68.97
216 TestMountStart/serial/StartWithMountFirst 6.65
217 TestMountStart/serial/VerifyMountFirst 0.26
218 TestMountStart/serial/StartWithMountSecond 6.4
219 TestMountStart/serial/VerifyMountSecond 0.27
220 TestMountStart/serial/DeleteFirst 1.64
221 TestMountStart/serial/VerifyMountPostDelete 0.26
222 TestMountStart/serial/Stop 1.2
223 TestMountStart/serial/RestartStopped 7.49
224 TestMountStart/serial/VerifyMountPostStop 0.25
227 TestMultiNode/serial/FreshStart2Nodes 78.14
228 TestMultiNode/serial/DeployApp2Nodes 7.56
229 TestMultiNode/serial/PingHostFrom2Pods 0.97
230 TestMultiNode/serial/AddNode 30.71
231 TestMultiNode/serial/MultiNodeLabels 0.1
232 TestMultiNode/serial/ProfileList 0.67
233 TestMultiNode/serial/CopyFile 9.52
234 TestMultiNode/serial/StopNode 2.18
235 TestMultiNode/serial/StartAfterStop 10.3
236 TestMultiNode/serial/RestartKeepsNodes 112.81
237 TestMultiNode/serial/DeleteNode 5.5
238 TestMultiNode/serial/StopMultiNode 23.83
239 TestMultiNode/serial/RestartMultiNode 54.06
240 TestMultiNode/serial/ValidateNameConflict 34.68
245 TestPreload 132.72
247 TestScheduledStopUnix 104.45
250 TestInsufficientStorage 10.2
251 TestRunningBinaryUpgrade 81.56
253 TestKubernetesUpgrade 381.25
254 TestMissingContainerUpgrade 169.5
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
257 TestNoKubernetes/serial/StartWithK8s 38.56
258 TestNoKubernetes/serial/StartWithStopK8s 20.03
259 TestNoKubernetes/serial/Start 5.79
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
261 TestNoKubernetes/serial/ProfileList 1.1
262 TestNoKubernetes/serial/Stop 1.27
263 TestNoKubernetes/serial/StartNoArgs 7.49
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
265 TestStoppedBinaryUpgrade/Setup 1.1
266 TestStoppedBinaryUpgrade/Upgrade 116.26
267 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
276 TestPause/serial/Start 49.13
277 TestPause/serial/SecondStartNoReconfiguration 40.17
278 TestPause/serial/Pause 0.9
279 TestPause/serial/VerifyStatus 0.32
280 TestPause/serial/Unpause 0.78
281 TestPause/serial/PauseAgain 1.36
282 TestPause/serial/DeletePaused 2.83
283 TestPause/serial/VerifyDeletedResources 0.41
291 TestNetworkPlugins/group/false 4.58
296 TestStartStop/group/old-k8s-version/serial/FirstStart 181.07
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.62
299 TestStartStop/group/old-k8s-version/serial/DeployApp 11.83
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.57
301 TestStartStop/group/old-k8s-version/serial/Stop 12.24
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
303 TestStartStop/group/old-k8s-version/serial/SecondStart 148.39
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.36
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
306 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.6
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 289.71
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
312 TestStartStop/group/old-k8s-version/serial/Pause 2.92
314 TestStartStop/group/embed-certs/serial/FirstStart 55.47
315 TestStartStop/group/embed-certs/serial/DeployApp 9.34
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
317 TestStartStop/group/embed-certs/serial/Stop 11.98
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
319 TestStartStop/group/embed-certs/serial/SecondStart 301.75
320 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
322 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
323 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
325 TestStartStop/group/no-preload/serial/FirstStart 60.25
326 TestStartStop/group/no-preload/serial/DeployApp 12.36
327 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
328 TestStartStop/group/no-preload/serial/Stop 12.02
329 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
330 TestStartStop/group/no-preload/serial/SecondStart 280.24
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
334 TestStartStop/group/embed-certs/serial/Pause 3
336 TestStartStop/group/newest-cni/serial/FirstStart 33.18
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
339 TestStartStop/group/newest-cni/serial/Stop 1.31
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
341 TestStartStop/group/newest-cni/serial/SecondStart 15.68
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
345 TestStartStop/group/newest-cni/serial/Pause 3.22
346 TestNetworkPlugins/group/auto/Start 51.26
347 TestNetworkPlugins/group/auto/KubeletFlags 0.28
348 TestNetworkPlugins/group/auto/NetCatPod 10.28
349 TestNetworkPlugins/group/auto/DNS 0.18
350 TestNetworkPlugins/group/auto/Localhost 0.17
351 TestNetworkPlugins/group/auto/HairPin 0.18
352 TestNetworkPlugins/group/kindnet/Start 53.42
353 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
354 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
355 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
356 TestStartStop/group/no-preload/serial/Pause 3.9
357 TestNetworkPlugins/group/calico/Start 63.77
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
360 TestNetworkPlugins/group/kindnet/NetCatPod 11.32
361 TestNetworkPlugins/group/kindnet/DNS 0.22
362 TestNetworkPlugins/group/kindnet/Localhost 0.19
363 TestNetworkPlugins/group/kindnet/HairPin 0.31
364 TestNetworkPlugins/group/custom-flannel/Start 59.78
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.34
367 TestNetworkPlugins/group/calico/NetCatPod 13.36
368 TestNetworkPlugins/group/calico/DNS 0.27
369 TestNetworkPlugins/group/calico/Localhost 0.27
370 TestNetworkPlugins/group/calico/HairPin 0.24
371 TestNetworkPlugins/group/enable-default-cni/Start 75.84
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
374 TestNetworkPlugins/group/custom-flannel/DNS 0.27
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
377 TestNetworkPlugins/group/flannel/Start 54.15
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.53
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.42
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
385 TestNetworkPlugins/group/flannel/NetCatPod 11.4
386 TestNetworkPlugins/group/bridge/Start 73.39
387 TestNetworkPlugins/group/flannel/DNS 0.18
388 TestNetworkPlugins/group/flannel/Localhost 0.17
389 TestNetworkPlugins/group/flannel/HairPin 0.26
390 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
391 TestNetworkPlugins/group/bridge/NetCatPod 11.27
392 TestNetworkPlugins/group/bridge/DNS 0.16
393 TestNetworkPlugins/group/bridge/Localhost 0.14
394 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (12.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-457703 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-457703 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.961959628s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.96s)
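With -o=json, minikube emits its progress as one JSON event per stdout line, which is what lets the test assert on download events. Below is a minimal, hypothetical driver in the same spirit (not the test's actual helper): it assumes a minikube binary on PATH, reuses the flags from the invocation above, and decodes each event generically rather than assuming a fixed schema.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Flags mirror the test invocation above; --alsologtostderr keeps the
	// klog output on stderr so stdout stays pure JSON.
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
		"-p", "download-only-457703", "--force", "--alsologtostderr",
		"--kubernetes-version=v1.20.0", "--container-runtime=crio", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise mixed into stdout
		}
		// Print whatever type/data fields the event carries; no schema is assumed.
		fmt.Printf("event type=%v data=%v\n", ev["type"], ev["data"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatalf("minikube start exited with error: %v", err)
	}
}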

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1014 13:38:52.945807    7544 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1014 13:38:52.945887    7544 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
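preload-exists passes instantly because it only has to confirm that the tarball downloaded by the previous test is on disk. A minimal sketch of that check follows; the path layout and the "v18" preload version are copied from the log line above rather than derived from minikube's own preload code, and MINIKUBE_HOME is assumed to point at the .minikube directory.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the tarball path shown in the preload.go log line above.
// The "v18" preload schema version is hardcoded from that line, not computed.
func preloadPath(minikubeHome, k8sVersion, runtime, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4",
		k8sVersion, runtime, arch)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	// Assumption: MINIKUBE_HOME is the .minikube directory, as in the log above.
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.20.0", "cri-o", "arm64")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload:", p)
	} else {
		fmt.Println("No local preload at:", p)
	}
}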

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-457703
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-457703: exit status 85 (67.227517ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-457703 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |          |
	|         | -p download-only-457703        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:38:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:38:40.030955    7549 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:38:40.031143    7549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:40.031155    7549 out.go:358] Setting ErrFile to fd 2...
	I1014 13:38:40.031161    7549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:40.031473    7549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	W1014 13:38:40.031656    7549 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19790-2228/.minikube/config/config.json: open /home/jenkins/minikube-integration/19790-2228/.minikube/config/config.json: no such file or directory
	I1014 13:38:40.032153    7549 out.go:352] Setting JSON to true
	I1014 13:38:40.033086    7549 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1271,"bootTime":1728911849,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1014 13:38:40.033173    7549 start.go:139] virtualization:  
	I1014 13:38:40.035482    7549 out.go:97] [download-only-457703] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1014 13:38:40.035718    7549 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball: no such file or directory
	I1014 13:38:40.035776    7549 notify.go:220] Checking for updates...
	I1014 13:38:40.037467    7549 out.go:169] MINIKUBE_LOCATION=19790
	I1014 13:38:40.038877    7549 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:38:40.040242    7549 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	I1014 13:38:40.041820    7549 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	I1014 13:38:40.043445    7549 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1014 13:38:40.046298    7549 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 13:38:40.046569    7549 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:38:40.066616    7549 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:38:40.066728    7549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:38:40.415404    7549 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 13:38:40.405742139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:38:40.415514    7549 docker.go:318] overlay module found
	I1014 13:38:40.416785    7549 out.go:97] Using the docker driver based on user configuration
	I1014 13:38:40.416814    7549 start.go:297] selected driver: docker
	I1014 13:38:40.416821    7549 start.go:901] validating driver "docker" against <nil>
	I1014 13:38:40.416923    7549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:38:40.475313    7549 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 13:38:40.465947138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:38:40.475511    7549 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:38:40.475827    7549 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1014 13:38:40.476000    7549 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 13:38:40.477783    7549 out.go:169] Using Docker driver with root privileges
	I1014 13:38:40.479262    7549 cni.go:84] Creating CNI manager for ""
	I1014 13:38:40.479417    7549 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 13:38:40.479450    7549 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 13:38:40.479546    7549 start.go:340] cluster config:
	{Name:download-only-457703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-457703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:38:40.480950    7549 out.go:97] Starting "download-only-457703" primary control-plane node in "download-only-457703" cluster
	I1014 13:38:40.480972    7549 cache.go:121] Beginning downloading kic base image for docker with crio
	I1014 13:38:40.482178    7549 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1014 13:38:40.482201    7549 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 13:38:40.482341    7549 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1014 13:38:40.497589    7549 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1014 13:38:40.497787    7549 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1014 13:38:40.497897    7549 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1014 13:38:40.559959    7549 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1014 13:38:40.559985    7549 cache.go:56] Caching tarball of preloaded images
	I1014 13:38:40.560144    7549 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 13:38:40.561830    7549 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1014 13:38:40.561851    7549 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1014 13:38:40.654296    7549 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1014 13:38:45.761159    7549 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1014 13:38:45.761261    7549 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1014 13:38:46.831499    7549 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1014 13:38:46.831865    7549 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/download-only-457703/config.json ...
	I1014 13:38:46.831897    7549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/download-only-457703/config.json: {Name:mka6651e53e7d8febdded4edccf58fb372f97b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:38:46.832075    7549 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 13:38:46.832248    7549 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19790-2228/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-457703 host does not exist
	  To start a cluster, run: "minikube start -p download-only-457703"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
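
The preload above is fetched with an md5 checksum appended to the URL (download.go:107) and verified after the write (preload.go:236-254). A minimal Go sketch of that download-then-verify pattern, with the URL and checksum taken from the log; the function name fetchWithMD5 is illustrative, not minikube's actual API:

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // fetchWithMD5 downloads url to dest and verifies the bytes written
    // against the expected hex-encoded md5 sum, failing on a mismatch.
    func fetchWithMD5(url, dest, wantMD5 string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	out, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer out.Close()

    	h := md5.New()
    	// Tee the response so the file and the hash see the same bytes.
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
    	}
    	return nil
    }

    func main() {
    	// URL and checksum as recorded by the download.go line above.
    	err := fetchWithMD5(
    		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4",
    		"preload.tar.lz4",
    		"59cd2ef07b53f039bfd1761b921f2a02")
    	if err != nil {
    		fmt.Println(err)
    	}
    }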

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-457703
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-347934 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-347934 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.230270336s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.23s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1014 13:38:59.586927    7544 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1014 13:38:59.586967    7544 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-347934
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-347934: exit status 85 (70.16923ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-457703 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | -p download-only-457703        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-457703        | download-only-457703 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| start   | -o=json --download-only        | download-only-347934 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | -p download-only-347934        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:38:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:38:53.398877    7747 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:38:53.399370    7747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:53.399396    7747 out.go:358] Setting ErrFile to fd 2...
	I1014 13:38:53.399414    7747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:53.399673    7747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	I1014 13:38:53.400063    7747 out.go:352] Setting JSON to true
	I1014 13:38:53.400786    7747 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1284,"bootTime":1728911849,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1014 13:38:53.400850    7747 start.go:139] virtualization:  
	I1014 13:38:53.402676    7747 out.go:97] [download-only-347934] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 13:38:53.402888    7747 notify.go:220] Checking for updates...
	I1014 13:38:53.403970    7747 out.go:169] MINIKUBE_LOCATION=19790
	I1014 13:38:53.405139    7747 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:38:53.406737    7747 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	I1014 13:38:53.407996    7747 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	I1014 13:38:53.409174    7747 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1014 13:38:53.411623    7747 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 13:38:53.411917    7747 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:38:53.446404    7747 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:38:53.446520    7747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:38:53.497192    7747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-14 13:38:53.488193528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:38:53.497300    7747 docker.go:318] overlay module found
	I1014 13:38:53.498995    7747 out.go:97] Using the docker driver based on user configuration
	I1014 13:38:53.499022    7747 start.go:297] selected driver: docker
	I1014 13:38:53.499029    7747 start.go:901] validating driver "docker" against <nil>
	I1014 13:38:53.499124    7747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:38:53.544573    7747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-14 13:38:53.535281847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:38:53.544813    7747 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:38:53.545102    7747 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1014 13:38:53.545258    7747 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 13:38:53.546823    7747 out.go:169] Using Docker driver with root privileges
	I1014 13:38:53.548135    7747 cni.go:84] Creating CNI manager for ""
	I1014 13:38:53.548202    7747 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 13:38:53.548217    7747 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 13:38:53.548295    7747 start.go:340] cluster config:
	{Name:download-only-347934 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-347934 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:38:53.549521    7747 out.go:97] Starting "download-only-347934" primary control-plane node in "download-only-347934" cluster
	I1014 13:38:53.549538    7747 cache.go:121] Beginning downloading kic base image for docker with crio
	I1014 13:38:53.550671    7747 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1014 13:38:53.550693    7747 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:38:53.550846    7747 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1014 13:38:53.565909    7747 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1014 13:38:53.566055    7747 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1014 13:38:53.566079    7747 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1014 13:38:53.566085    7747 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1014 13:38:53.566093    7747 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1014 13:38:53.606044    7747 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1014 13:38:53.606069    7747 cache.go:56] Caching tarball of preloaded images
	I1014 13:38:53.606234    7747 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:38:53.608686    7747 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1014 13:38:53.608705    7747 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1014 13:38:53.687169    7747 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1014 13:38:58.073793    7747 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1014 13:38:58.073905    7747 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19790-2228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-347934 host does not exist
	  To start a cluster, run: "minikube start -p download-only-347934"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-347934
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
I1014 13:39:00.837268    7544 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-893512 --alsologtostderr --binary-mirror http://127.0.0.1:35277 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-893512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-893512
--- PASS: TestBinaryMirror (0.56s)
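
TestBinaryMirror points minikube at --binary-mirror http://127.0.0.1:35277 so kubectl is fetched from a local server instead of dl.k8s.io. Any static file server that mirrors the release path layout can play that role; a minimal sketch in Go, where the ./mirror directory layout is an assumption inferred from the release URL, not something the test prescribes:

    package main

    import (
    	"log"
    	"net/http"
    )

    func main() {
    	// Serve a directory that mimics the dl.k8s.io layout, e.g.
    	// ./mirror/release/v1.31.1/bin/linux/arm64/kubectl
    	http.Handle("/", http.FileServer(http.Dir("./mirror")))
    	log.Fatal(http.ListenAndServe("127.0.0.1:35277", nil))
    }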

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-002422
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-002422: exit status 85 (74.677554ms)

                                                
                                                
-- stdout --
	* Profile "addons-002422" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-002422"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-002422
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-002422: exit status 85 (72.079528ms)

                                                
                                                
-- stdout --
	* Profile "addons-002422" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-002422"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (171.73s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-002422 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-002422 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m51.72959756s)
--- PASS: TestAddons/Setup (171.73s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-002422 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-002422 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                    
TestAddons/serial/GCPAuth/PullSecret (11.84s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-002422 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-002422 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bea2f880-3886-4eff-bebf-e74127253bba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bea2f880-3886-4eff-bebf-e74127253bba] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 11.004150799s
addons_test.go:633: (dbg) Run:  kubectl --context addons-002422 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-002422 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-002422 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-002422 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (11.84s)

                                                
                                    
TestAddons/parallel/Registry (17.46s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.590787ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-ddkrt" [091b0f03-dc90-4b2b-bbd3-c73a13edd832] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003239544s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wjht4" [7f1138a2-5ec8-4c04-a3b7-fdb6c0af33aa] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003497597s
addons_test.go:331: (dbg) Run:  kubectl --context addons-002422 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-002422 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-002422 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.553286948s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 ip
2024/10/14 13:42:30 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.46s)
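
The registry check above runs wget --spider -S against the in-cluster service DNS name from a throwaway busybox pod. A headers-only probe in Go does the same thing; note the service name resolves only inside the cluster, so this sketch would have to run in a pod, not on the host:

    package main

    import (
    	"fmt"
    	"log"
    	"net/http"
    )

    func main() {
    	// HEAD is the closest analogue of `wget --spider`: fetch headers
    	// only and report the status without downloading a body.
    	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
    	if err != nil {
    		log.Fatal(err)
    	}
    	resp.Body.Close()
    	fmt.Println(resp.Status)
    }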

                                                
                                    
TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fm5p5" [9a8d4cbd-cc13-4454-8daf-2bd7b77e9b2b] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004129208s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-002422 addons disable inspektor-gadget --alsologtostderr -v=1: (5.701598676s)
--- PASS: TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                    
TestAddons/parallel/CSI (53.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1014 13:42:31.462148    7544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1014 13:42:31.470724    7544 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1014 13:42:31.470760    7544 kapi.go:107] duration metric: took 8.624639ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.635429ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-002422 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-002422 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e17d1a1c-7764-434a-bd56-d695a3308fd3] Pending
helpers_test.go:344: "task-pv-pod" [e17d1a1c-7764-434a-bd56-d695a3308fd3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e17d1a1c-7764-434a-bd56-d695a3308fd3] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003709596s
addons_test.go:511: (dbg) Run:  kubectl --context addons-002422 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-002422 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-002422 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-002422 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-002422 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-002422 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-002422 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6f729471-bf86-4bbb-9f42-934c8720c9a3] Pending
helpers_test.go:344: "task-pv-pod-restore" [6f729471-bf86-4bbb-9f42-934c8720c9a3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6f729471-bf86-4bbb-9f42-934c8720c9a3] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003835828s
addons_test.go:553: (dbg) Run:  kubectl --context addons-002422 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-002422 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-002422 delete volumesnapshot new-snapshot-demo
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-002422 addons disable volumesnapshots --alsologtostderr -v=1: (1.011265148s)
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-002422 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.780371322s)
--- PASS: TestAddons/parallel/CSI (53.58s)
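
The long run of identical helpers_test.go:394 lines above is a poll loop: the helper re-reads {.status.phase} until the claim reports the expected phase. A standalone sketch of the same loop shelling out to kubectl; waitForPVCPhase is an illustrative name, not the real helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForPVCPhase re-runs the same jsonpath query the helper logs
    // until the claim reaches the wanted phase or the deadline passes.
    func waitForPVCPhase(kubectx, ns, name, want string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--context", kubectx,
    			"get", "pvc", name, "-n", ns,
    			"-o", "jsonpath={.status.phase}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == want {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pvc %s/%s never reached phase %q", ns, name, want)
    }

    func main() {
    	if err := waitForPVCPhase("addons-002422", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }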

                                                
                                    
TestAddons/parallel/Headlamp (17.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-002422 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-m99zb" [e564b2af-2f3a-4e61-9c42-9bddfff5e73f] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-m99zb" [e564b2af-2f3a-4e61-9c42-9bddfff5e73f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-m99zb" [e564b2af-2f3a-4e61-9c42-9bddfff5e73f] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003450757s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable headlamp --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-002422 addons disable headlamp --alsologtostderr -v=1: (6.049606893s)
--- PASS: TestAddons/parallel/Headlamp (17.99s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-fwt5t" [86f73907-eaae-4e0f-a065-402b32cc3a03] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003622686s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.54s)

                                                
                                    
TestAddons/parallel/LocalPath (52.32s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-002422 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-002422 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-002422 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c438b292-11c3-4eb8-92f1-314654602c8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c438b292-11c3-4eb8-92f1-314654602c8d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c438b292-11c3-4eb8-92f1-314654602c8d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003305951s
addons_test.go:902: (dbg) Run:  kubectl --context addons-002422 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 ssh "cat /opt/local-path-provisioner/pvc-89f2f068-92eb-4538-ac74-ca3f5159b907_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-002422 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-002422 delete pvc test-pvc
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-002422 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.204544385s)
--- PASS: TestAddons/parallel/LocalPath (52.32s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tnngr" [a113dbce-1d95-437b-83fc-dd34499d10e4] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003669997s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
TestAddons/parallel/Yakd (11.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-qgctz" [1587b9a8-d9f3-42af-ad80-25669061969f] Running
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003516709s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable yakd --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-002422 addons disable yakd --alsologtostderr -v=1: (5.704591477s)
--- PASS: TestAddons/parallel/Yakd (11.71s)

TestAddons/StoppedEnableDisable (12.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-002422
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-002422: (11.898070863s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-002422
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-002422
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-002422
--- PASS: TestAddons/StoppedEnableDisable (12.18s)
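
The property under test, runnable verbatim: addon enable/disable must succeed even while the profile is stopped.

    out/minikube-linux-arm64 stop -p addons-002422
    out/minikube-linux-arm64 addons enable dashboard -p addons-002422     # works against the stopped cluster
    out/minikube-linux-arm64 addons disable dashboard -p addons-002422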

TestCertOptions (38.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-367007 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-367007 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.429472185s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-367007 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-367007 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-367007 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-367007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-367007
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-367007: (1.957134238s)
--- PASS: TestCertOptions (38.07s)
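
A manual version of the certificate check; the grep filter is ours, not the test's:

    # the custom --apiserver-ips/--apiserver-names from the start command should appear as SANs
    out/minikube-linux-arm64 -p cert-options-367007 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"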

TestCertExpiration (240.47s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-250124 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-250124 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.766113107s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-250124 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-250124 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.270447714s)
helpers_test.go:175: Cleaning up "cert-expiration-250124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-250124
E1014 14:31:54.035529    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-250124: (2.432048364s)
--- PASS: TestCertExpiration (240.47s)
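
The shape of the test, sketched: issue 3-minute certificates, let them lapse, then restart with a one-year window so the second start has to regenerate them. The explicit sleep below is our stand-in for the wait the harness performs, which accounts for most of the ~240s runtime:

    out/minikube-linux-arm64 start -p cert-expiration-250124 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
    sleep 180    # wait out the short-lived certificates
    out/minikube-linux-arm64 start -p cert-expiration-250124 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio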

TestForceSystemdFlag (41.34s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-563719 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-563719 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.395480883s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-563719 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-563719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-563719
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-563719: (2.517930275s)
--- PASS: TestForceSystemdFlag (41.34s)
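
To inspect the effect by hand; the cgroup_manager key name is CRI-O's convention, our assumption rather than something this log shows:

    out/minikube-linux-arm64 -p force-systemd-flag-563719 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # with --force-systemd this is expected to read: cgroup_manager = "systemd"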

TestForceSystemdEnv (40.39s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-509530 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-509530 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.783379713s)
helpers_test.go:175: Cleaning up "force-systemd-env-509530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-509530
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-509530: (2.610238609s)
--- PASS: TestForceSystemdEnv (40.39s)
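
The same assertion driven by the environment variable rather than the flag; MINIKUBE_FORCE_SYSTEMD is the variable shown (empty) in the env listings later in this report:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-509530 --memory=2048 --driver=docker --container-runtime=crio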

TestErrorSpam/setup (31.85s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-561869 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-561869 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-561869 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-561869 --driver=docker  --container-runtime=crio: (31.853220084s)
--- PASS: TestErrorSpam/setup (31.85s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.03s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 status
--- PASS: TestErrorSpam/status (1.03s)

TestErrorSpam/pause (1.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 pause
--- PASS: TestErrorSpam/pause (1.70s)

TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 stop: (1.275410212s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561869 --log_dir /tmp/nospam-561869 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19790-2228/.minikube/files/etc/test/nested/copy/7544/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.59s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-606999 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-606999 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (47.592593139s)
--- PASS: TestFunctional/serial/StartWithProxy (47.59s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.44s)

=== RUN   TestFunctional/serial/SoftStart
I1014 13:50:07.057430    7544 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-606999 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-606999 --alsologtostderr -v=8: (29.438396239s)
functional_test.go:663: soft start took 29.441682369s for "functional-606999" cluster.
I1014 13:50:36.496140    7544 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.44s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-606999 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-606999 cache add registry.k8s.io/pause:3.1: (1.346815526s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-606999 cache add registry.k8s.io/pause:3.3: (1.523131277s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-606999 cache add registry.k8s.io/pause:latest: (1.271682898s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.14s)
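
The cache subcommands exercised here, runnable verbatim; note the test invokes list/delete without -p, since the image cache lives in MINIKUBE_HOME rather than in any one profile:

    out/minikube-linux-arm64 -p functional-606999 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list
    out/minikube-linux-arm64 -p functional-606999 ssh sudo crictl images    # confirm the image reached the node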

TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-606999 /tmp/TestFunctionalserialCacheCmdcacheadd_local3007073403/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 cache add minikube-local-cache-test:functional-606999
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 cache delete minikube-local-cache-test:functional-606999
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-606999
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-606999 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.342587ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-606999 cache reload: (1.256925464s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)
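
The reload round-trip, step by step as the test runs it:

    out/minikube-linux-arm64 -p functional-606999 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-606999 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # fails: image gone from the node
    out/minikube-linux-arm64 -p functional-606999 cache reload                                             # pushes cached images back
    out/minikube-linux-arm64 -p functional-606999 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again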

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 kubectl -- --context functional-606999 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-606999 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (36.23s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-606999 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-606999 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.22872488s)
functional_test.go:761: restart took 36.228829791s for "functional-606999" cluster.
I1014 13:51:21.395116    7544 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (36.23s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-606999 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-606999 logs: (1.653813034s)
--- PASS: TestFunctional/serial/LogsCmd (1.65s)

TestFunctional/serial/LogsFileCmd (1.65s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 logs --file /tmp/TestFunctionalserialLogsFileCmd1791254946/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-606999 logs --file /tmp/TestFunctionalserialLogsFileCmd1791254946/001/logs.txt: (1.644210757s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.65s)

TestFunctional/serial/InvalidService (4.51s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-606999 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-606999
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-606999: exit status 115 (655.435449ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32500 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-606999 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.51s)
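
Reproducing the negative case is just the commands from the log; the non-zero exit (115, SVC_UNREACHABLE) is the expected outcome because the service has no running pod behind it:

    kubectl --context functional-606999 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-606999    # expected to fail with SVC_UNREACHABLE
    kubectl --context functional-606999 delete -f testdata/invalidsvc.yaml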

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-606999 config get cpus: exit status 14 (84.533042ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-606999 config get cpus: exit status 14 (71.855702ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

TestFunctional/parallel/DashboardCmd (10.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-606999 --alsologtostderr -v=1]
E1014 13:52:04.297088    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-606999 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 35019: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.21s)

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-606999 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-606999 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (179.104018ms)

-- stdout --
	* [functional-606999] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1014 13:52:02.777189   34780 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:52:02.777356   34780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:52:02.777369   34780 out.go:358] Setting ErrFile to fd 2...
	I1014 13:52:02.777375   34780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:52:02.777646   34780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	I1014 13:52:02.778034   34780 out.go:352] Setting JSON to false
	I1014 13:52:02.778904   34780 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2074,"bootTime":1728911849,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1014 13:52:02.778985   34780 start.go:139] virtualization:  
	I1014 13:52:02.781977   34780 out.go:177] * [functional-606999] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 13:52:02.785283   34780 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:52:02.785402   34780 notify.go:220] Checking for updates...
	I1014 13:52:02.790429   34780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:52:02.792978   34780 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	I1014 13:52:02.795719   34780 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	I1014 13:52:02.798346   34780 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 13:52:02.800943   34780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:52:02.804033   34780 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:52:02.804559   34780 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:52:02.825223   34780 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:52:02.825349   34780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:52:02.884273   34780 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 13:52:02.874373879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:52:02.884387   34780 docker.go:318] overlay module found
	I1014 13:52:02.887285   34780 out.go:177] * Using the docker driver based on existing profile
	I1014 13:52:02.890116   34780 start.go:297] selected driver: docker
	I1014 13:52:02.890136   34780 start.go:901] validating driver "docker" against &{Name:functional-606999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-606999 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:52:02.890254   34780 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:52:02.893491   34780 out.go:201] 
	W1014 13:52:02.896358   34780 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1014 13:52:02.899153   34780 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-606999 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
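
What --dry-run buys here: flag validation, including the 1800MB memory floor, runs before any resources are created, so both outcomes are safe to try against a live profile:

    out/minikube-linux-arm64 start -p functional-606999 --dry-run --memory 250MB --driver=docker --container-runtime=crio    # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-linux-arm64 start -p functional-606999 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio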

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-606999 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-606999 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (184.837708ms)

-- stdout --
	* [functional-606999] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1014 13:52:02.595579   34734 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:52:02.595785   34734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:52:02.595811   34734 out.go:358] Setting ErrFile to fd 2...
	I1014 13:52:02.595829   34734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:52:02.596242   34734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	I1014 13:52:02.596641   34734 out.go:352] Setting JSON to false
	I1014 13:52:02.597591   34734 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2073,"bootTime":1728911849,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1014 13:52:02.597692   34734 start.go:139] virtualization:  
	I1014 13:52:02.600776   34734 out.go:177] * [functional-606999] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1014 13:52:02.603408   34734 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:52:02.603528   34734 notify.go:220] Checking for updates...
	I1014 13:52:02.608154   34734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:52:02.610409   34734 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	I1014 13:52:02.612937   34734 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	I1014 13:52:02.615289   34734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 13:52:02.617616   34734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:52:02.620653   34734 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:52:02.621277   34734 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:52:02.652198   34734 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:52:02.652325   34734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:52:02.703744   34734 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 13:52:02.693751511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:52:02.703851   34734 docker.go:318] overlay module found
	I1014 13:52:02.708101   34734 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1014 13:52:02.710570   34734 start.go:297] selected driver: docker
	I1014 13:52:02.710590   34734 start.go:901] validating driver "docker" against &{Name:functional-606999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-606999 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:52:02.710706   34734 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:52:02.713872   34734 out.go:201] 
	W1014 13:52:02.716491   34734 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1014 13:52:02.719306   34734 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

TestFunctional/parallel/ServiceCmdConnect (11.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-606999 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-606999 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-vwncd" [f3098de9-3f7d-4c9a-a087-495c3f97613f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-vwncd" [f3098de9-3f7d-4c9a-a087-495c3f97613f] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003681582s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30618
functional_test.go:1675: http://192.168.49.2:30618: success! body:

Hostname: hello-node-connect-65d86f57f4-vwncd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30618
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.65s)
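
The NodePort round-trip this test performs, condensed; the curl call is our stand-in for the test's HTTP client:

    kubectl --context functional-606999 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-606999 expose deployment hello-node-connect --type=NodePort --port=8080
    curl "$(out/minikube-linux-arm64 -p functional-606999 service hello-node-connect --url)"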

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (26.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [77ec0180-1b8b-442c-bb7b-e400041b75d1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00347344s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-606999 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-606999 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-606999 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-606999 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9aa6e58b-9e3c-4424-b459-ded2dd979066] Pending
helpers_test.go:344: "sp-pod" [9aa6e58b-9e3c-4424-b459-ded2dd979066] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9aa6e58b-9e3c-4424-b459-ded2dd979066] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003387672s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-606999 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-606999 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-606999 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [08fba5a8-ed13-4237-87bd-455d40ceb087] Pending
helpers_test.go:344: "sp-pod" [08fba5a8-ed13-4237-87bd-455d40ceb087] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [08fba5a8-ed13-4237-87bd-455d40ceb087] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003547335s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-606999 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.53s)
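
The claim and pod above come from testdata manifests that are not reproduced in this log. The following is a rough equivalent reconstructed from the names visible here (myclaim, sp-pod, myfrontend, the /tmp/mount mount point); the image, storage size, and volume name are illustrative placeholders, not the test's actual values:

kubectl --context functional-606999 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx:alpine
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF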

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)
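
ssh runs a one-off command inside the node and returns a non-zero exit code if the remote command fails; the two probes above:

  out/minikube-linux-arm64 -p functional-606999 ssh "echo hello"
  out/minikube-linux-arm64 -p functional-606999 ssh "cat /etc/hostname"   # prints the node's hostname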

TestFunctional/parallel/CpCmd (2.25s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh -n functional-606999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 cp functional-606999:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2652594105/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh -n functional-606999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh -n functional-606999 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.25s)
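
cp copies files between the host and the node. In the runs above an unprefixed source is a host path, an unprefixed destination lands inside the node (missing directories such as /tmp/does/not/exist are created), and <node>:<path> addresses the node explicitly. A sketch using the same file:

  # host -> node
  out/minikube-linux-arm64 -p functional-606999 cp testdata/cp-test.txt /home/docker/cp-test.txt
  # node -> host (the local destination here is an arbitrary placeholder)
  out/minikube-linux-arm64 -p functional-606999 cp functional-606999:/home/docker/cp-test.txt ./cp-test.txt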

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7544/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo cat /etc/test/nested/copy/7544/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)
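
The path checked above, /etc/test/nested/copy/7544/hosts, embeds the test process PID (7544, the same ID that prefixes the I1014 log lines). The test exercises minikube's file-sync convention: files placed under the host's .minikube/files tree are copied into the machine at the corresponding absolute path. A sketch of that convention (assuming the default ~/.minikube home):

  mkdir -p ~/.minikube/files/etc/test/nested/copy/7544
  echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/7544/hosts
  out/minikube-linux-arm64 start -p functional-606999   # sync is applied when the machine starts
  out/minikube-linux-arm64 -p functional-606999 ssh "sudo cat /etc/test/nested/copy/7544/hosts"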

TestFunctional/parallel/CertSync (2.11s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7544.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo cat /etc/ssl/certs/7544.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7544.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo cat /usr/share/ca-certificates/7544.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo cat /etc/ssl/certs/75442.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo cat /usr/share/ca-certificates/75442.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)
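
Each certificate is checked in three places inside the node: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and an OpenSSL subject-hash entry (51391683.0 and 3ec20f2e.0 above). The hash-style name can be recomputed from the certificate itself; a sketch, assuming the first cert is available on the host as 7544.pem:

  openssl x509 -in 7544.pem -noout -hash   # expected to print 51391683, matching the .0 entry checked above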

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-606999 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
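
The label probe uses a kubectl go-template that iterates the first node's metadata.labels and prints only the keys; the same query, quoted for interactive use:

  kubectl --context functional-606999 get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'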

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-606999 ssh "sudo systemctl is-active docker": exit status 1 (282.278708ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-606999 ssh "sudo systemctl is-active containerd": exit status 1 (266.963603ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
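
This profile runs cri-o, so the other container runtimes must be disabled: systemctl is-active prints "inactive" and exits with status 3 for an inactive unit, which minikube ssh surfaces as the non-zero exit seen above. The complementary probe for the active runtime (a sketch; crio is the service name implied by ContainerRuntime=crio elsewhere in this log):

  out/minikube-linux-arm64 -p functional-606999 ssh "sudo systemctl is-active crio"   # expected: active, exit 0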

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-606999 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-606999 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-606999 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 32496: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-606999 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-606999 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-606999 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [263efdd8-95b8-478f-aed4-5c23bc164759] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [263efdd8-95b8-478f-aed4-5c23bc164759] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003924482s
I1014 13:51:39.529483    7544 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)
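
minikube tunnel keeps a route from the host into the cluster's service network so that LoadBalancer services receive an ingress IP; the subtests that follow read that IP back. The pattern, in two terminals:

  # terminal 1: keep the tunnel running in the foreground
  out/minikube-linux-arm64 -p functional-606999 tunnel
  # terminal 2: once the tunnel is up, the service reports an ingress IP
  kubectl --context functional-606999 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'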

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-606999 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.94.32 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-606999 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-606999 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-606999 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-xn567" [3750298b-de27-4129-8f42-34907992693b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-xn567" [3750298b-de27-4129-8f42-34907992693b] Running
E1014 13:51:54.037495    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:54.043951    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:54.055427    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:54.076920    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:54.118390    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:54.199920    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:54.361377    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:54.683060    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:55.324443    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:56.606594    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.00434242s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)
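
The workload under test is created with stock kubectl commands, verbatim from the run above:

  kubectl --context functional-606999 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-606999 expose deployment hello-node --type=NodePort --port=8080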

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "329.705352ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "95.203619ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
E1014 13:51:59.174500    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1366: Took "378.115328ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "56.891094ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (9.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdany-port2633381567/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728913919364622149" to /tmp/TestFunctionalparallelMountCmdany-port2633381567/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728913919364622149" to /tmp/TestFunctionalparallelMountCmdany-port2633381567/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728913919364622149" to /tmp/TestFunctionalparallelMountCmdany-port2633381567/001/test-1728913919364622149
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (429.043522ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 13:51:59.794027    7544 retry.go:31] will retry after 698.47367ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 14 13:51 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 14 13:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 14 13:51 test-1728913919364622149
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh cat /mount-9p/test-1728913919364622149
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-606999 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c0968774-b488-4c4a-96da-3da77c14c6f9] Pending
helpers_test.go:344: "busybox-mount" [c0968774-b488-4c4a-96da-3da77c14c6f9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c0968774-b488-4c4a-96da-3da77c14c6f9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c0968774-b488-4c4a-96da-3da77c14c6f9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004659245s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-606999 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdany-port2633381567/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.70s)
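
minikube mount exposes a host directory inside the node over 9p using the <host-dir>:<node-dir> form and stays in the foreground until stopped. A sketch (the host path is a placeholder; the test uses a per-run temp directory):

  out/minikube-linux-arm64 mount -p functional-606999 /tmp/somedir:/mount-9p &
  # confirm the 9p mount is visible inside the node
  out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T /mount-9p | grep 9p"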

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 service list -o json
functional_test.go:1494: Took "642.776083ms" to run "out/minikube-linux-arm64 -p functional-606999 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31182
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31182
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)
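
service resolves a NodePort service to a reachable endpoint; the two forms used above return the same host:port and differ only in scheme:

  out/minikube-linux-arm64 -p functional-606999 service hello-node --url                                # http://192.168.49.2:31182
  out/minikube-linux-arm64 -p functional-606999 service --namespace=default --https --url hello-node    # https://192.168.49.2:31182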

TestFunctional/parallel/MountCmd/specific-port (2.03s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdspecific-port502473884/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (386.51286ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 13:52:09.454828    7544 retry.go:31] will retry after 432.615355ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdspecific-port502473884/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-606999 ssh "sudo umount -f /mount-9p": exit status 1 (361.905126ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-606999 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdspecific-port502473884/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.44s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891080729/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891080729/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891080729/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T" /mount1: exit status 1 (900.116902ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 13:52:11.999763    7544 retry.go:31] will retry after 688.484251ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T" /mount2
2024/10/14 13:52:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-606999 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891080729/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891080729/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-606999 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891080729/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.44s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.27s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-606999 version -o=json --components: (1.266467949s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-606999 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-606999
localhost/kicbase/echo-server:functional-606999
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-606999 image ls --format short --alsologtostderr:
I1014 13:52:20.847840   37626 out.go:345] Setting OutFile to fd 1 ...
I1014 13:52:20.847969   37626 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:52:20.847980   37626 out.go:358] Setting ErrFile to fd 2...
I1014 13:52:20.847985   37626 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:52:20.848230   37626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
I1014 13:52:20.849008   37626 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:52:20.849170   37626 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:52:20.850051   37626 cli_runner.go:164] Run: docker container inspect functional-606999 --format={{.State.Status}}
I1014 13:52:20.878283   37626 ssh_runner.go:195] Run: systemctl --version
I1014 13:52:20.878334   37626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-606999
I1014 13:52:20.898537   37626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/functional-606999/id_rsa Username:docker}
I1014 13:52:20.996863   37626 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
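
image ls takes a --format selector; the variants exercised in this report (short, table, json, yaml below) render the same image set differently:

  out/minikube-linux-arm64 -p functional-606999 image ls --format short   # one name:tag per line
  out/minikube-linux-arm64 -p functional-606999 image ls --format table   # adds image ID and size columns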

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-606999 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| localhost/kicbase/echo-server           | functional-606999  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 0bcd66b03df5f | 98.3MB |
| docker.io/library/nginx                 | alpine             | 577a23b5858b9 | 52.3MB |
| docker.io/library/nginx                 | latest             | 048e090385966 | 201MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/minikube-local-cache-test     | functional-606999  | 9eaf0786c2b43 | 3.33kB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-606999 image ls --format table --alsologtostderr:
I1014 13:52:21.411447   37778 out.go:345] Setting OutFile to fd 1 ...
I1014 13:52:21.411647   37778 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:52:21.411676   37778 out.go:358] Setting ErrFile to fd 2...
I1014 13:52:21.411697   37778 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:52:21.412082   37778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
I1014 13:52:21.413181   37778 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:52:21.413388   37778 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:52:21.414216   37778 cli_runner.go:164] Run: docker container inspect functional-606999 --format={{.State.Status}}
I1014 13:52:21.438957   37778 ssh_runner.go:195] Run: systemctl --version
I1014 13:52:21.439015   37778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-606999
I1014 13:52:21.460490   37778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/functional-606999/id_rsa Username:docker}
I1014 13:52:21.556870   37778 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-606999 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478"],"repoTags":["docker.io/library/nginx:alpine"]
,"size":"52254450"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d
0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"200984127"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-606999"],"size":"4788229"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb4
0c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"98291250"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.
4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a68
7ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"9eaf0786c2b43405ce63de1facd1480ab6fe3a562dfa878cbf7da0a0d2691d95","repoDigests":["localhost/minikube-local-cache-test@sha256:b17739431fa224b33434cd3655ad86e119ab3b345f28caf1eb263c3bf10167a4"],"repoTags":["localhost/minikube-local-cache-test:functional-606999"],"size":"3330"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"13991
2446"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-606999 image ls --format json --alsologtostderr:
I1014 13:52:21.150091   37695 out.go:345] Setting OutFile to fd 1 ...
I1014 13:52:21.150226   37695 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:52:21.150235   37695 out.go:358] Setting ErrFile to fd 2...
I1014 13:52:21.150241   37695 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:52:21.150511   37695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
I1014 13:52:21.151134   37695 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:52:21.151252   37695 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:52:21.151730   37695 cli_runner.go:164] Run: docker container inspect functional-606999 --format={{.State.Status}}
I1014 13:52:21.175089   37695 ssh_runner.go:195] Run: systemctl --version
I1014 13:52:21.175143   37695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-606999
I1014 13:52:21.200947   37695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/functional-606999/id_rsa Username:docker}
I1014 13:52:21.302880   37695 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
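
The JSON form is the machine-readable variant; a sketch of extracting tagged images and their sizes from it, assuming jq is available on the host:

  out/minikube-linux-arm64 -p functional-606999 image ls --format json \
    | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[0] + "\t" + .size'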

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-606999 image ls --format yaml --alsologtostderr:
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478
repoTags:
- docker.io/library/nginx:alpine
size: "52254450"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "98291250"
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "200984127"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-606999
size: "4788229"
- id: 9eaf0786c2b43405ce63de1facd1480ab6fe3a562dfa878cbf7da0a0d2691d95
repoDigests:
- localhost/minikube-local-cache-test@sha256:b17739431fa224b33434cd3655ad86e119ab3b345f28caf1eb263c3bf10167a4
repoTags:
- localhost/minikube-local-cache-test:functional-606999
size: "3330"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-606999 image ls --format yaml --alsologtostderr:
I1014 13:52:20.827251   37627 out.go:345] Setting OutFile to fd 1 ...
I1014 13:52:20.827420   37627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:52:20.827429   37627 out.go:358] Setting ErrFile to fd 2...
I1014 13:52:20.827435   37627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:52:20.827686   37627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
I1014 13:52:20.828407   37627 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:52:20.828564   37627 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:52:20.829210   37627 cli_runner.go:164] Run: docker container inspect functional-606999 --format={{.State.Status}}
I1014 13:52:20.856206   37627 ssh_runner.go:195] Run: systemctl --version
I1014 13:52:20.856252   37627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-606999
I1014 13:52:20.884396   37627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/functional-606999/id_rsa Username:docker}
I1014 13:52:20.989003   37627 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-606999 ssh pgrep buildkitd: exit status 1 (327.831024ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image build -t localhost/my-image:functional-606999 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-606999 image build -t localhost/my-image:functional-606999 testdata/build --alsologtostderr: (3.056084105s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-606999 image build -t localhost/my-image:functional-606999 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9467c79c04e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-606999
--> 0aed1ffab95
Successfully tagged localhost/my-image:functional-606999
0aed1ffab95bf327eb673c863df5810aa0cf752b549c94fe06d0371e2aaca00b
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-606999 image build -t localhost/my-image:functional-606999 testdata/build --alsologtostderr:
I1014 13:52:21.454179   37784 out.go:345] Setting OutFile to fd 1 ...
I1014 13:52:21.454406   37784 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:52:21.454435   37784 out.go:358] Setting ErrFile to fd 2...
I1014 13:52:21.454458   37784 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:52:21.454711   37784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
I1014 13:52:21.455396   37784 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:52:21.455982   37784 config.go:182] Loaded profile config "functional-606999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:52:21.456487   37784 cli_runner.go:164] Run: docker container inspect functional-606999 --format={{.State.Status}}
I1014 13:52:21.483614   37784 ssh_runner.go:195] Run: systemctl --version
I1014 13:52:21.483667   37784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-606999
I1014 13:52:21.502534   37784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/functional-606999/id_rsa Username:docker}
I1014 13:52:21.601536   37784 build_images.go:161] Building image from path: /tmp/build.3195377737.tar
I1014 13:52:21.601598   37784 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1014 13:52:21.611045   37784 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3195377737.tar
I1014 13:52:21.614363   37784 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3195377737.tar: stat -c "%s %y" /var/lib/minikube/build/build.3195377737.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3195377737.tar': No such file or directory
I1014 13:52:21.614396   37784 ssh_runner.go:362] scp /tmp/build.3195377737.tar --> /var/lib/minikube/build/build.3195377737.tar (3072 bytes)
I1014 13:52:21.638564   37784 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3195377737
I1014 13:52:21.647239   37784 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3195377737 -xf /var/lib/minikube/build/build.3195377737.tar
I1014 13:52:21.656216   37784 crio.go:315] Building image: /var/lib/minikube/build/build.3195377737
I1014 13:52:21.656300   37784 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-606999 /var/lib/minikube/build/build.3195377737 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1014 13:52:24.413191   37784 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-606999 /var/lib/minikube/build/build.3195377737 --cgroup-manager=cgroupfs: (2.756858422s)
I1014 13:52:24.413260   37784 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3195377737
I1014 13:52:24.422037   37784 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3195377737.tar
I1014 13:52:24.430658   37784 build_images.go:217] Built localhost/my-image:functional-606999 from /tmp/build.3195377737.tar
I1014 13:52:24.430686   37784 build_images.go:133] succeeded building to: functional-606999
I1014 13:52:24.430691   37784 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.61s)
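The testdata/build context itself is not part of this report, but the STEP lines above pin down its Dockerfile. A reconstruction (content.txt's contents are not shown, so any payload used to reproduce this is hypothetical):

  # testdata/build/Dockerfile, as implied by STEP 1/3 through 3/3 above
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /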

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-606999
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image load --daemon kicbase/echo-server:functional-606999 --alsologtostderr
E1014 13:52:14.539330    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-606999 image load --daemon kicbase/echo-server:functional-606999 --alsologtostderr: (1.268928712s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image load --daemon kicbase/echo-server:functional-606999 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-606999
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image load --daemon kicbase/echo-server:functional-606999 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image save kicbase/echo-server:functional-606999 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image rm kicbase/echo-server:functional-606999 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-606999
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-606999 image save --daemon kicbase/echo-server:functional-606999 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-606999
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
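Together with ImageSaveToFile, ImageRemove, and ImageLoadFromFile above, this completes a full save/remove/load round trip for the tagged image. A manual equivalent, assuming a writable /tmp path in place of the Jenkins workspace used in the log:

  minikube -p functional-606999 image save kicbase/echo-server:functional-606999 /tmp/echo-server-save.tar
  minikube -p functional-606999 image rm kicbase/echo-server:functional-606999
  minikube -p functional-606999 image load /tmp/echo-server-save.tar
  minikube -p functional-606999 image save --daemon kicbase/echo-server:functional-606999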

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-606999
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-606999
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-606999
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (175.04s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-465285 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1014 13:52:35.021337    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:53:15.982703    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:54:37.904042    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-465285 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m54.275103727s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (175.04s)
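The start invocation above is what builds the multi-control-plane topology: --ha requests a highly available cluster with three control-plane nodes, and --wait=true blocks until every component reports healthy. A minimal reproduction using only flags that appear in the log (minikube standing in for the out/minikube-linux-arm64 binary under test):

  minikube start -p ha-465285 --ha --wait=true --memory=2200 --driver=docker --container-runtime=crio
  minikube -p ha-465285 status -v=7 --alsologtostderr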

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.14s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-465285 -- rollout status deployment/busybox: (5.234144753s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-2g749 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-kxxqc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-pgmnv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-2g749 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-kxxqc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-pgmnv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-2g749 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-kxxqc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-pgmnv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.14s)

TestMultiControlPlane/serial/PingHostFromPods (1.5s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-2g749 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-2g749 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-kxxqc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-kxxqc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-pgmnv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-465285 -- exec busybox-7dff88458-pgmnv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.50s)
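The pipeline in these exec calls resolves host.minikube.internal from inside each pod and then pings the result: awk 'NR==5' picks the fifth line of the pod's nslookup output (which, in busybox's output format, carries the resolved address) and cut -d' ' -f3 extracts the IP itself, the 192.168.49.1 host gateway on the default kic network. Standalone, with a pod name taken from the log:

  kubectl --context ha-465285 exec busybox-7dff88458-2g749 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"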

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (35.79s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-465285 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-465285 -v=7 --alsologtostderr: (34.83210534s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.79s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-465285 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

TestMultiControlPlane/serial/CopyFile (17.91s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp testdata/cp-test.txt ha-465285:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3822547211/001/cp-test_ha-465285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285:/home/docker/cp-test.txt ha-465285-m02:/home/docker/cp-test_ha-465285_ha-465285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m02 "sudo cat /home/docker/cp-test_ha-465285_ha-465285-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285:/home/docker/cp-test.txt ha-465285-m03:/home/docker/cp-test_ha-465285_ha-465285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m03 "sudo cat /home/docker/cp-test_ha-465285_ha-465285-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285:/home/docker/cp-test.txt ha-465285-m04:/home/docker/cp-test_ha-465285_ha-465285-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m04 "sudo cat /home/docker/cp-test_ha-465285_ha-465285-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp testdata/cp-test.txt ha-465285-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3822547211/001/cp-test_ha-465285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m02:/home/docker/cp-test.txt ha-465285:/home/docker/cp-test_ha-465285-m02_ha-465285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285 "sudo cat /home/docker/cp-test_ha-465285-m02_ha-465285.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m02:/home/docker/cp-test.txt ha-465285-m03:/home/docker/cp-test_ha-465285-m02_ha-465285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m03 "sudo cat /home/docker/cp-test_ha-465285-m02_ha-465285-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m02:/home/docker/cp-test.txt ha-465285-m04:/home/docker/cp-test_ha-465285-m02_ha-465285-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m04 "sudo cat /home/docker/cp-test_ha-465285-m02_ha-465285-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp testdata/cp-test.txt ha-465285-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3822547211/001/cp-test_ha-465285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m03:/home/docker/cp-test.txt ha-465285:/home/docker/cp-test_ha-465285-m03_ha-465285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285 "sudo cat /home/docker/cp-test_ha-465285-m03_ha-465285.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m03:/home/docker/cp-test.txt ha-465285-m02:/home/docker/cp-test_ha-465285-m03_ha-465285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m02 "sudo cat /home/docker/cp-test_ha-465285-m03_ha-465285-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m03:/home/docker/cp-test.txt ha-465285-m04:/home/docker/cp-test_ha-465285-m03_ha-465285-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m04 "sudo cat /home/docker/cp-test_ha-465285-m03_ha-465285-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp testdata/cp-test.txt ha-465285-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3822547211/001/cp-test_ha-465285-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m04:/home/docker/cp-test.txt ha-465285:/home/docker/cp-test_ha-465285-m04_ha-465285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285 "sudo cat /home/docker/cp-test_ha-465285-m04_ha-465285.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m04:/home/docker/cp-test.txt ha-465285-m02:/home/docker/cp-test_ha-465285-m04_ha-465285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m02 "sudo cat /home/docker/cp-test_ha-465285-m04_ha-465285-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 cp ha-465285-m04:/home/docker/cp-test.txt ha-465285-m03:/home/docker/cp-test_ha-465285-m04_ha-465285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 ssh -n ha-465285-m03 "sudo cat /home/docker/cp-test_ha-465285-m04_ha-465285-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.91s)
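Every transfer in the matrix above is the same two-step pattern: minikube cp to place the file, then minikube ssh -n to read it back on the target node. In generic form (the <source>/<target> placeholders are illustrative):

  minikube -p ha-465285 cp <source>:/home/docker/cp-test.txt <target>:/home/docker/cp-test_<source>_<target>.txt
  minikube -p ha-465285 ssh -n <target> "sudo cat /home/docker/cp-test_<source>_<target>.txt"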

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.7s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 node stop m02 -v=7 --alsologtostderr
E1014 13:56:31.036935    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:31.043362    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:31.054796    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:31.076235    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:31.117705    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:31.199148    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:31.360619    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:31.682323    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:32.323738    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:33.605377    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:36.166692    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-465285 node stop m02 -v=7 --alsologtostderr: (11.971726227s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr: exit status 7 (732.026373ms)

-- stdout --
	ha-465285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-465285-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-465285-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-465285-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1014 13:56:38.944673   53544 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:56:38.944917   53544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:56:38.944943   53544 out.go:358] Setting ErrFile to fd 2...
	I1014 13:56:38.944965   53544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:56:38.945253   53544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	I1014 13:56:38.945481   53544 out.go:352] Setting JSON to false
	I1014 13:56:38.945609   53544 mustload.go:65] Loading cluster: ha-465285
	I1014 13:56:38.945691   53544 notify.go:220] Checking for updates...
	I1014 13:56:38.946136   53544 config.go:182] Loaded profile config "ha-465285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:38.946183   53544 status.go:174] checking status of ha-465285 ...
	I1014 13:56:38.947208   53544 cli_runner.go:164] Run: docker container inspect ha-465285 --format={{.State.Status}}
	I1014 13:56:38.965023   53544 status.go:371] ha-465285 host status = "Running" (err=<nil>)
	I1014 13:56:38.965047   53544 host.go:66] Checking if "ha-465285" exists ...
	I1014 13:56:38.965353   53544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-465285
	I1014 13:56:39.002324   53544 host.go:66] Checking if "ha-465285" exists ...
	I1014 13:56:39.002640   53544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 13:56:39.002724   53544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-465285
	I1014 13:56:39.025610   53544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/ha-465285/id_rsa Username:docker}
	I1014 13:56:39.118569   53544 ssh_runner.go:195] Run: systemctl --version
	I1014 13:56:39.122847   53544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:39.134957   53544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:56:39.199215   53544 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-14 13:56:39.188725134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:56:39.199808   53544 kubeconfig.go:125] found "ha-465285" server: "https://192.168.49.254:8443"
	I1014 13:56:39.199876   53544 api_server.go:166] Checking apiserver status ...
	I1014 13:56:39.199927   53544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:56:39.211470   53544 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1371/cgroup
	I1014 13:56:39.220681   53544 api_server.go:182] apiserver freezer: "5:freezer:/docker/ebfa428c93482d1031d69643da3be85dcda4898fa5506dd83d1d8d13b132d229/crio/crio-5e7009842f710c5cab426b099ec25b673db1510c7684269f867cc60cfea1f5bd"
	I1014 13:56:39.220852   53544 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ebfa428c93482d1031d69643da3be85dcda4898fa5506dd83d1d8d13b132d229/crio/crio-5e7009842f710c5cab426b099ec25b673db1510c7684269f867cc60cfea1f5bd/freezer.state
	I1014 13:56:39.229718   53544 api_server.go:204] freezer state: "THAWED"
	I1014 13:56:39.229745   53544 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1014 13:56:39.237387   53544 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1014 13:56:39.237463   53544 status.go:463] ha-465285 apiserver status = Running (err=<nil>)
	I1014 13:56:39.237484   53544 status.go:176] ha-465285 status: &{Name:ha-465285 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 13:56:39.237502   53544 status.go:174] checking status of ha-465285-m02 ...
	I1014 13:56:39.237798   53544 cli_runner.go:164] Run: docker container inspect ha-465285-m02 --format={{.State.Status}}
	I1014 13:56:39.254351   53544 status.go:371] ha-465285-m02 host status = "Stopped" (err=<nil>)
	I1014 13:56:39.254378   53544 status.go:384] host is not running, skipping remaining checks
	I1014 13:56:39.254385   53544 status.go:176] ha-465285-m02 status: &{Name:ha-465285-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 13:56:39.254429   53544 status.go:174] checking status of ha-465285-m03 ...
	I1014 13:56:39.254756   53544 cli_runner.go:164] Run: docker container inspect ha-465285-m03 --format={{.State.Status}}
	I1014 13:56:39.272003   53544 status.go:371] ha-465285-m03 host status = "Running" (err=<nil>)
	I1014 13:56:39.272029   53544 host.go:66] Checking if "ha-465285-m03" exists ...
	I1014 13:56:39.272324   53544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-465285-m03
	I1014 13:56:39.292564   53544 host.go:66] Checking if "ha-465285-m03" exists ...
	I1014 13:56:39.292922   53544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 13:56:39.292978   53544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-465285-m03
	I1014 13:56:39.308391   53544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/ha-465285-m03/id_rsa Username:docker}
	I1014 13:56:39.397871   53544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:39.409845   53544 kubeconfig.go:125] found "ha-465285" server: "https://192.168.49.254:8443"
	I1014 13:56:39.409877   53544 api_server.go:166] Checking apiserver status ...
	I1014 13:56:39.409947   53544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:56:39.420414   53544 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1328/cgroup
	I1014 13:56:39.430096   53544 api_server.go:182] apiserver freezer: "5:freezer:/docker/f23d1eabceae3e853ea1743bf33c1e19162b8fdc39bf774817e6a58001caa90c/crio/crio-7ef1b58d757903f6edbb773b0c7acdd495ab26f007f328f1d719660d022f2a39"
	I1014 13:56:39.430176   53544 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f23d1eabceae3e853ea1743bf33c1e19162b8fdc39bf774817e6a58001caa90c/crio/crio-7ef1b58d757903f6edbb773b0c7acdd495ab26f007f328f1d719660d022f2a39/freezer.state
	I1014 13:56:39.438665   53544 api_server.go:204] freezer state: "THAWED"
	I1014 13:56:39.438693   53544 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1014 13:56:39.446451   53544 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1014 13:56:39.446527   53544 status.go:463] ha-465285-m03 apiserver status = Running (err=<nil>)
	I1014 13:56:39.446550   53544 status.go:176] ha-465285-m03 status: &{Name:ha-465285-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 13:56:39.446605   53544 status.go:174] checking status of ha-465285-m04 ...
	I1014 13:56:39.446961   53544 cli_runner.go:164] Run: docker container inspect ha-465285-m04 --format={{.State.Status}}
	I1014 13:56:39.463843   53544 status.go:371] ha-465285-m04 host status = "Running" (err=<nil>)
	I1014 13:56:39.463880   53544 host.go:66] Checking if "ha-465285-m04" exists ...
	I1014 13:56:39.464165   53544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-465285-m04
	I1014 13:56:39.488310   53544 host.go:66] Checking if "ha-465285-m04" exists ...
	I1014 13:56:39.488966   53544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 13:56:39.489016   53544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-465285-m04
	I1014 13:56:39.508093   53544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/ha-465285-m04/id_rsa Username:docker}
	I1014 13:56:39.597675   53544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:39.613002   53544 status.go:176] ha-465285-m04 status: &{Name:ha-465285-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.70s)
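The status probe in the stderr above locates the apiserver PID, reads its freezer cgroup to confirm the process is THAWED (not frozen), then hits /healthz on the HA virtual IP 192.168.49.254. Done by hand on a node it would look roughly like this (the curl step is an assumption, as the test uses Go's HTTP client, and <cgroup-path> is the path the egrep returns):

  pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
  sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup
  sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state   # expect THAWED
  curl -k https://192.168.49.254:8443/healthz                   # expect 200 ok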

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (21.56s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 node start m02 -v=7 --alsologtostderr
E1014 13:56:41.288070    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:51.529628    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:54.035731    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-465285 node start m02 -v=7 --alsologtostderr: (19.971878199s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr: (1.446376167s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.415858642s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (165.9s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-465285 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-465285 -v=7 --alsologtostderr
E1014 13:57:12.011812    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:57:21.745962    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-465285 -v=7 --alsologtostderr: (37.161874685s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-465285 --wait=true -v=7 --alsologtostderr
E1014 13:57:52.973903    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:59:14.895376    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-465285 --wait=true -v=7 --alsologtostderr: (2m8.533909516s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-465285
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (165.90s)
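The cycle here is a plain stop/start of the same profile; --wait=true on the restart forces minikube to block until all four nodes rejoin before the node lists are compared. The sequence, as it appears in the log:

  minikube node list -p ha-465285 -v=7 --alsologtostderr
  minikube stop -p ha-465285 -v=7 --alsologtostderr
  minikube start -p ha-465285 --wait=true -v=7 --alsologtostderr
  minikube node list -p ha-465285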

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.63s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-465285 node delete m03 -v=7 --alsologtostderr: (11.704235719s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.63s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

TestMultiControlPlane/serial/StopCluster (35.82s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-465285 stop -v=7 --alsologtostderr: (35.706728711s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr: exit status 7 (117.00593ms)

-- stdout --
	ha-465285
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-465285-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-465285-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:00:38.418303   67432 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:00:38.418515   67432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:00:38.418547   67432 out.go:358] Setting ErrFile to fd 2...
	I1014 14:00:38.418567   67432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:00:38.418923   67432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	I1014 14:00:38.419186   67432 out.go:352] Setting JSON to false
	I1014 14:00:38.419253   67432 mustload.go:65] Loading cluster: ha-465285
	I1014 14:00:38.419991   67432 config.go:182] Loaded profile config "ha-465285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:00:38.420041   67432 status.go:174] checking status of ha-465285 ...
	I1014 14:00:38.420875   67432 cli_runner.go:164] Run: docker container inspect ha-465285 --format={{.State.Status}}
	I1014 14:00:38.422057   67432 notify.go:220] Checking for updates...
	I1014 14:00:38.438032   67432 status.go:371] ha-465285 host status = "Stopped" (err=<nil>)
	I1014 14:00:38.438060   67432 status.go:384] host is not running, skipping remaining checks
	I1014 14:00:38.438068   67432 status.go:176] ha-465285 status: &{Name:ha-465285 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 14:00:38.438094   67432 status.go:174] checking status of ha-465285-m02 ...
	I1014 14:00:38.438428   67432 cli_runner.go:164] Run: docker container inspect ha-465285-m02 --format={{.State.Status}}
	I1014 14:00:38.454691   67432 status.go:371] ha-465285-m02 host status = "Stopped" (err=<nil>)
	I1014 14:00:38.454715   67432 status.go:384] host is not running, skipping remaining checks
	I1014 14:00:38.454721   67432 status.go:176] ha-465285-m02 status: &{Name:ha-465285-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 14:00:38.454746   67432 status.go:174] checking status of ha-465285-m04 ...
	I1014 14:00:38.455045   67432 cli_runner.go:164] Run: docker container inspect ha-465285-m04 --format={{.State.Status}}
	I1014 14:00:38.481848   67432 status.go:371] ha-465285-m04 host status = "Stopped" (err=<nil>)
	I1014 14:00:38.481873   67432 status.go:384] host is not running, skipping remaining checks
	I1014 14:00:38.481879   67432 status.go:176] ha-465285-m04 status: &{Name:ha-465285-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.82s)
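
The "exit status 7" above is the expected result here, not a failure: minikube status composes its exit code from bit flags for a stopped host, stopped cluster, and stopped Kubernetes components, and 7 corresponds to a profile where all of them are down (the exact encoding may vary by minikube version, so treat this as a sketch):

	minikube -p ha-465285 status; echo "exit code: $?"   # prints 7 when every node is stopped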

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (72.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-465285 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1014 14:01:31.036974    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-465285 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m11.516175722s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (72.42s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (73.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-465285 --control-plane -v=7 --alsologtostderr
E1014 14:01:54.036165    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:01:58.737012    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-465285 --control-plane -v=7 --alsologtostderr: (1m12.833919784s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-465285 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.78s)
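
Adding a control-plane member to an existing HA profile, as exercised above, is a single node add invocation followed by a status check; a minimal sketch without the verbosity flags used in the test:

	minikube node add -p ha-465285 --control-plane
	minikube -p ha-465285 status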

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                    
TestJSONOutput/start/Command (79.53s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-409593 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-409593 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m19.524158867s)
--- PASS: TestJSONOutput/start/Command (79.53s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-409593 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-409593 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-409593 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-409593 --output=json --user=testUser: (5.868734063s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-725828 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-725828 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.746668ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cd1e443c-83ef-4761-915d-27ebf9a4d49d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-725828] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b154c8b6-ae60-48fe-8759-f15d89fcfe4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19790"}}
	{"specversion":"1.0","id":"4baee95d-fc22-4eb7-b892-41e02f35a0d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bc2adaf9-f926-47fc-b31c-31c9a41dbba0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig"}}
	{"specversion":"1.0","id":"80e653b6-c730-4dc6-b209-e3fe9a23a73c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube"}}
	{"specversion":"1.0","id":"06ac7d27-a94d-4120-b472-725e12e21833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1237f171-65a2-49aa-a03b-8d9727980f5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"48ecf5a0-b156-4f50-b734-5104a5b553b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-725828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-725828
--- PASS: TestErrorJSONOutput (0.21s)
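
The --output=json events above are CloudEvents-style envelopes (specversion, id, source, type, data) emitted one JSON object per line; failures surface as io.k8s.sigs.minikube.error events carrying an exitcode and a message, as in the DRV_UNSUPPORTED_OS event shown. A sketch of pulling error messages out with jq (jq assumed to be available; profile name hypothetical, --driver=fail mirrors the test's intentionally unsupported driver):

	minikube start -p demo --driver=fail --output=json | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'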

                                                
                                    
TestKicCustomNetwork/create_custom_network (36.98s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-070407 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-070407 --network=: (34.834021324s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-070407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-070407
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-070407: (2.130994882s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.98s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.86s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-667692 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-667692 --network=bridge: (29.837911554s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-667692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-667692
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-667692: (2.002490666s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.86s)

                                                
                                    
TestKicExistingNetwork (32.06s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1014 14:05:55.587886    7544 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1014 14:05:55.603325    7544 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1014 14:05:55.603404    7544 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1014 14:05:55.603421    7544 cli_runner.go:164] Run: docker network inspect existing-network
W1014 14:05:55.623074    7544 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1014 14:05:55.623104    7544 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1014 14:05:55.623116    7544 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1014 14:05:55.623217    7544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1014 14:05:55.641155    7544 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96afa958ff30 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:63:6f:51:ae} reservation:<nil>}
I1014 14:05:55.641528    7544 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e4dc60}
I1014 14:05:55.641555    7544 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1014 14:05:55.641606    7544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1014 14:05:55.708028    7544 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-053623 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-053623 --network=existing-network: (29.971133349s)
helpers_test.go:175: Cleaning up "existing-network-053623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-053623
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-053623: (1.933975043s)
I1014 14:06:27.628759    7544 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.06s)
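
The flow above: the test first inspects for "existing-network", finds none, pre-creates it on a free private subnet (192.168.58.0/24, since 192.168.49.0/24 was taken), and then starts a profile attached to it via --network. A trimmed sketch of the two steps (the bridge options and minikube labels from the log are omitted for brevity):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
	minikube start -p existing-network-053623 --network=existing-network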

                                                
                                    
TestKicCustomSubnet (34.07s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-005686 --subnet=192.168.60.0/24
E1014 14:06:31.036469    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:06:54.036880    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-005686 --subnet=192.168.60.0/24: (31.908373458s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-005686 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-005686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-005686
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-005686: (2.14023496s)
--- PASS: TestKicCustomSubnet (34.07s)
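
--subnet pins the Docker network minikube creates for the profile, and the inspect format string above is a quick way to confirm it took effect; condensed from the log:

	minikube start -p custom-subnet-005686 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-005686 --format '{{(index .IPAM.Config 0).Subnet}}'   # expected: 192.168.60.0/24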

                                                
                                    
TestKicStaticIP (35.65s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-690495 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-690495 --static-ip=192.168.200.200: (33.44549973s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-690495 ip
helpers_test.go:175: Cleaning up "static-ip-690495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-690495
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-690495: (2.056221339s)
--- PASS: TestKicStaticIP (35.65s)
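
--static-ip assigns the profile's container a fixed address (it needs to fall in a private range); pairing it with minikube ip verifies the assignment, as the test does above:

	minikube start -p static-ip-690495 --static-ip=192.168.200.200
	minikube -p static-ip-690495 ip   # expected: 192.168.200.200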

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (68.97s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-777128 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-777128 --driver=docker  --container-runtime=crio: (30.970238556s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-779500 --driver=docker  --container-runtime=crio
E1014 14:08:17.109767    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-779500 --driver=docker  --container-runtime=crio: (32.542761532s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-777128
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-779500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-779500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-779500
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-779500: (1.915442734s)
helpers_test.go:175: Cleaning up "first-777128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-777128
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-777128: (2.216109004s)
--- PASS: TestMinikubeProfile (68.97s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-368322 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-368322 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.651337422s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.65s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-368322 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
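
The two mount-start steps above pair a Kubernetes-less start that mounts the host into the guest with an ssh listing that proves the mount is visible; condensed from the log (the msize/gid/uid flags are omitted here):

	minikube start -p mount-start-1-368322 --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=crio
	minikube -p mount-start-1-368322 ssh -- ls /minikube-host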

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.4s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-370570 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-370570 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.400276499s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.40s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-370570 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-368322 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-368322 --alsologtostderr -v=5: (1.642754881s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-370570 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-370570
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-370570: (1.203431804s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.49s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-370570
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-370570: (6.486374122s)
--- PASS: TestMountStart/serial/RestartStopped (7.49s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-370570 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (78.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-436076 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-436076 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.646288565s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.14s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-436076 -- rollout status deployment/busybox: (5.720005229s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- exec busybox-7dff88458-4qrwk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- exec busybox-7dff88458-bcx7z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- exec busybox-7dff88458-4qrwk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- exec busybox-7dff88458-bcx7z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- exec busybox-7dff88458-4qrwk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- exec busybox-7dff88458-bcx7z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.56s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- exec busybox-7dff88458-4qrwk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- exec busybox-7dff88458-4qrwk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- exec busybox-7dff88458-bcx7z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436076 -- exec busybox-7dff88458-bcx7z -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
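
The shell pipeline above extracts the host's address as seen from inside a pod: busybox nslookup prints the resolved address for host.minikube.internal on its fifth output line, awk 'NR==5' selects that line, and cut takes the third space-separated field; the follow-up ping then checks pod-to-host reachability. A standalone sketch (pod name and gateway address taken from the log):

	kubectl exec busybox-7dff88458-4qrwk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl exec busybox-7dff88458-4qrwk -- sh -c "ping -c 1 192.168.67.1"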

                                                
                                    
TestMultiNode/serial/AddNode (30.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-436076 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-436076 -v 3 --alsologtostderr: (30.057050464s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.71s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-436076 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp testdata/cp-test.txt multinode-436076:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp multinode-436076:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile243361357/001/cp-test_multinode-436076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp multinode-436076:/home/docker/cp-test.txt multinode-436076-m02:/home/docker/cp-test_multinode-436076_multinode-436076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m02 "sudo cat /home/docker/cp-test_multinode-436076_multinode-436076-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp multinode-436076:/home/docker/cp-test.txt multinode-436076-m03:/home/docker/cp-test_multinode-436076_multinode-436076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m03 "sudo cat /home/docker/cp-test_multinode-436076_multinode-436076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp testdata/cp-test.txt multinode-436076-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp multinode-436076-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile243361357/001/cp-test_multinode-436076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp multinode-436076-m02:/home/docker/cp-test.txt multinode-436076:/home/docker/cp-test_multinode-436076-m02_multinode-436076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076 "sudo cat /home/docker/cp-test_multinode-436076-m02_multinode-436076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp multinode-436076-m02:/home/docker/cp-test.txt multinode-436076-m03:/home/docker/cp-test_multinode-436076-m02_multinode-436076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m03 "sudo cat /home/docker/cp-test_multinode-436076-m02_multinode-436076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp testdata/cp-test.txt multinode-436076-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp multinode-436076-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile243361357/001/cp-test_multinode-436076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp multinode-436076-m03:/home/docker/cp-test.txt multinode-436076:/home/docker/cp-test_multinode-436076-m03_multinode-436076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076 "sudo cat /home/docker/cp-test_multinode-436076-m03_multinode-436076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 cp multinode-436076-m03:/home/docker/cp-test.txt multinode-436076-m02:/home/docker/cp-test_multinode-436076-m03_multinode-436076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 ssh -n multinode-436076-m02 "sudo cat /home/docker/cp-test_multinode-436076-m03_multinode-436076-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.52s)
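
minikube cp accepts host paths and <node>:<path> endpoints in either position, which is what lets the test above copy host-to-node, node-to-host, and node-to-node; verification is a sudo cat over ssh against the target node. A condensed pair from the log:

	minikube -p multinode-436076 cp testdata/cp-test.txt multinode-436076-m02:/home/docker/cp-test.txt
	minikube -p multinode-436076 ssh -n multinode-436076-m02 "sudo cat /home/docker/cp-test.txt"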

                                                
                                    
TestMultiNode/serial/StopNode (2.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-436076 node stop m03: (1.222144162s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-436076 status: exit status 7 (478.694147ms)

                                                
                                                
-- stdout --
	multinode-436076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-436076-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-436076-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-436076 status --alsologtostderr: exit status 7 (480.464753ms)

                                                
                                                
-- stdout --
	multinode-436076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-436076-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-436076-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:11:22.022557  120618 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:11:22.022681  120618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:11:22.022691  120618 out.go:358] Setting ErrFile to fd 2...
	I1014 14:11:22.022697  120618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:11:22.022928  120618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	I1014 14:11:22.023106  120618 out.go:352] Setting JSON to false
	I1014 14:11:22.023152  120618 mustload.go:65] Loading cluster: multinode-436076
	I1014 14:11:22.023249  120618 notify.go:220] Checking for updates...
	I1014 14:11:22.023576  120618 config.go:182] Loaded profile config "multinode-436076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:11:22.023592  120618 status.go:174] checking status of multinode-436076 ...
	I1014 14:11:22.024200  120618 cli_runner.go:164] Run: docker container inspect multinode-436076 --format={{.State.Status}}
	I1014 14:11:22.043720  120618 status.go:371] multinode-436076 host status = "Running" (err=<nil>)
	I1014 14:11:22.043745  120618 host.go:66] Checking if "multinode-436076" exists ...
	I1014 14:11:22.044073  120618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-436076
	I1014 14:11:22.067882  120618 host.go:66] Checking if "multinode-436076" exists ...
	I1014 14:11:22.068184  120618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 14:11:22.068234  120618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-436076
	I1014 14:11:22.086881  120618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/multinode-436076/id_rsa Username:docker}
	I1014 14:11:22.177835  120618 ssh_runner.go:195] Run: systemctl --version
	I1014 14:11:22.181862  120618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:11:22.193904  120618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 14:11:22.244188  120618 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-14 14:11:22.234665903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 14:11:22.244811  120618 kubeconfig.go:125] found "multinode-436076" server: "https://192.168.67.2:8443"
	I1014 14:11:22.244842  120618 api_server.go:166] Checking apiserver status ...
	I1014 14:11:22.244884  120618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:11:22.255804  120618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1400/cgroup
	I1014 14:11:22.265080  120618 api_server.go:182] apiserver freezer: "5:freezer:/docker/5cac09c9283e24cd0722beea56313535a2f478b23184d763e68ca3d962ce6cea/crio/crio-5a1b5b7239f281c835c66665f790c22040e4f4020821a5d1d53c7508cc9cd6df"
	I1014 14:11:22.265164  120618 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5cac09c9283e24cd0722beea56313535a2f478b23184d763e68ca3d962ce6cea/crio/crio-5a1b5b7239f281c835c66665f790c22040e4f4020821a5d1d53c7508cc9cd6df/freezer.state
	I1014 14:11:22.274460  120618 api_server.go:204] freezer state: "THAWED"
	I1014 14:11:22.274489  120618 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1014 14:11:22.282128  120618 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1014 14:11:22.282154  120618 status.go:463] multinode-436076 apiserver status = Running (err=<nil>)
	I1014 14:11:22.282164  120618 status.go:176] multinode-436076 status: &{Name:multinode-436076 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 14:11:22.282202  120618 status.go:174] checking status of multinode-436076-m02 ...
	I1014 14:11:22.282516  120618 cli_runner.go:164] Run: docker container inspect multinode-436076-m02 --format={{.State.Status}}
	I1014 14:11:22.298304  120618 status.go:371] multinode-436076-m02 host status = "Running" (err=<nil>)
	I1014 14:11:22.298328  120618 host.go:66] Checking if "multinode-436076-m02" exists ...
	I1014 14:11:22.298624  120618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-436076-m02
	I1014 14:11:22.313713  120618 host.go:66] Checking if "multinode-436076-m02" exists ...
	I1014 14:11:22.314035  120618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 14:11:22.314077  120618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-436076-m02
	I1014 14:11:22.330261  120618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19790-2228/.minikube/machines/multinode-436076-m02/id_rsa Username:docker}
	I1014 14:11:22.418056  120618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:11:22.429497  120618 status.go:176] multinode-436076-m02 status: &{Name:multinode-436076-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1014 14:11:22.429532  120618 status.go:174] checking status of multinode-436076-m03 ...
	I1014 14:11:22.429876  120618 cli_runner.go:164] Run: docker container inspect multinode-436076-m03 --format={{.State.Status}}
	I1014 14:11:22.446713  120618 status.go:371] multinode-436076-m03 host status = "Stopped" (err=<nil>)
	I1014 14:11:22.446738  120618 status.go:384] host is not running, skipping remaining checks
	I1014 14:11:22.446745  120618 status.go:176] multinode-436076-m03 status: &{Name:multinode-436076-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.18s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 node start m03 -v=7 --alsologtostderr
E1014 14:11:31.036800    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-436076 node start m03 -v=7 --alsologtostderr: (9.553120727s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.30s)

TestMultiNode/serial/RestartKeepsNodes (112.81s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-436076
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-436076
E1014 14:11:54.036257    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-436076: (24.79630047s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-436076 --wait=true -v=8 --alsologtostderr
E1014 14:12:54.098482    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-436076 --wait=true -v=8 --alsologtostderr: (1m27.8996131s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-436076
--- PASS: TestMultiNode/serial/RestartKeepsNodes (112.81s)

TestMultiNode/serial/DeleteNode (5.5s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-436076 node delete m03: (4.832977137s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.50s)
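
The go-template passed to kubectl above walks every node and prints the status of its "Ready" condition. The same template can be exercised locally with Go's text/template package; the mock node list below is illustrative, shaped like the lowercase JSON keys kubectl exposes to templates:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Mock of the JSON shape kubectl feeds to the template; the keys are
		// lowercase, which is why the template says .items, .type and .status.
		nodes := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "MemoryPressure", "status": "False"},
					{"type": "Ready", "status": "True"},
				}}},
			},
		}
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
		_ = tmpl.Execute(os.Stdout, nodes) // prints " True" for each ready node
	}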

TestMultiNode/serial/StopMultiNode (23.83s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-436076 stop: (23.636146931s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-436076 status: exit status 7 (94.292664ms)
-- stdout --
	multinode-436076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-436076-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-436076 status --alsologtostderr: exit status 7 (95.30378ms)
-- stdout --
	multinode-436076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-436076-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1014 14:13:54.837629  128429 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:13:54.837776  128429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:13:54.837788  128429 out.go:358] Setting ErrFile to fd 2...
	I1014 14:13:54.837794  128429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:13:54.838184  128429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	I1014 14:13:54.838447  128429 out.go:352] Setting JSON to false
	I1014 14:13:54.838496  128429 mustload.go:65] Loading cluster: multinode-436076
	I1014 14:13:54.839264  128429 notify.go:220] Checking for updates...
	I1014 14:13:54.839648  128429 config.go:182] Loaded profile config "multinode-436076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:13:54.839843  128429 status.go:174] checking status of multinode-436076 ...
	I1014 14:13:54.840481  128429 cli_runner.go:164] Run: docker container inspect multinode-436076 --format={{.State.Status}}
	I1014 14:13:54.857127  128429 status.go:371] multinode-436076 host status = "Stopped" (err=<nil>)
	I1014 14:13:54.857146  128429 status.go:384] host is not running, skipping remaining checks
	I1014 14:13:54.857153  128429 status.go:176] multinode-436076 status: &{Name:multinode-436076 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 14:13:54.857185  128429 status.go:174] checking status of multinode-436076-m02 ...
	I1014 14:13:54.857487  128429 cli_runner.go:164] Run: docker container inspect multinode-436076-m02 --format={{.State.Status}}
	I1014 14:13:54.882240  128429 status.go:371] multinode-436076-m02 host status = "Stopped" (err=<nil>)
	I1014 14:13:54.882259  128429 status.go:384] host is not running, skipping remaining checks
	I1014 14:13:54.882265  128429 status.go:176] multinode-436076-m02 status: &{Name:multinode-436076-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)
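
Note that the status command deliberately exits non-zero when any node is down (exit status 7 above), so callers can branch on the exit code instead of parsing the text. A hedged sketch of that pattern, assuming a minikube binary on PATH:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "multinode-436076", "status").CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit code 7 is how status reports a stopped host, as in the log above.
			fmt.Printf("status exited with code %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("all nodes running:\n%s", out)
	}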

TestMultiNode/serial/RestartMultiNode (54.06s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-436076 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-436076 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.421992702s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436076 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.06s)

TestMultiNode/serial/ValidateNameConflict (34.68s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-436076
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-436076-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-436076-m02 --driver=docker  --container-runtime=crio: exit status 14 (89.2157ms)
-- stdout --
	* [multinode-436076-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-436076-m02' is duplicated with machine name 'multinode-436076-m02' in profile 'multinode-436076'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-436076-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-436076-m03 --driver=docker  --container-runtime=crio: (32.335005027s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-436076
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-436076: exit status 80 (311.394209ms)
-- stdout --
	* Adding node m03 to cluster multinode-436076 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-436076-m03 already exists in multinode-436076-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-436076-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-436076-m03: (1.893569186s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.68s)
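
Both rejections above come from name-collision checks: a new profile may not reuse a machine name that an existing multi-node profile already owns. A simplified sketch of that kind of uniqueness check (the helper below is hypothetical, not minikube's actual implementation):

	package main

	import "fmt"

	// validateProfileName is a hypothetical stand-in for the uniqueness check:
	// a new profile may not shadow a machine name claimed by another profile.
	func validateProfileName(name string, existingMachines []string) error {
		for _, m := range existingMachines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
			}
		}
		return nil
	}

	func main() {
		machines := []string{"multinode-436076", "multinode-436076-m02"}
		if err := validateProfileName("multinode-436076-m02", machines); err != nil {
			fmt.Println("X Exiting due to MK_USAGE:", err) // mirrors the exit status 14 case above
		}
	}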

TestPreload (132.72s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-611198 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1014 14:16:31.037056    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:16:54.036181    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-611198 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.018383954s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-611198 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-611198 image pull gcr.io/k8s-minikube/busybox: (3.18884768s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-611198
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-611198: (5.851854974s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-611198 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-611198 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (28.151051657s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-611198 image list
helpers_test.go:175: Cleaning up "test-preload-611198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-611198
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-611198: (2.280234328s)
--- PASS: TestPreload (132.72s)

TestScheduledStopUnix (104.45s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-551203 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-551203 --memory=2048 --driver=docker  --container-runtime=crio: (28.323616634s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-551203 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-551203 -n scheduled-stop-551203
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-551203 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1014 14:18:09.225332    7544 retry.go:31] will retry after 68.418µs: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.225624    7544 retry.go:31] will retry after 130.629µs: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.226487    7544 retry.go:31] will retry after 311.84µs: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.228002    7544 retry.go:31] will retry after 501.601µs: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.230173    7544 retry.go:31] will retry after 730.767µs: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.231295    7544 retry.go:31] will retry after 745.159µs: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.232410    7544 retry.go:31] will retry after 937.581µs: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.233519    7544 retry.go:31] will retry after 1.87209ms: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.235666    7544 retry.go:31] will retry after 2.490087ms: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.238809    7544 retry.go:31] will retry after 2.47795ms: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.242023    7544 retry.go:31] will retry after 6.603698ms: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.249261    7544 retry.go:31] will retry after 8.180501ms: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.258489    7544 retry.go:31] will retry after 9.85248ms: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.268856    7544 retry.go:31] will retry after 25.288006ms: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.296808    7544 retry.go:31] will retry after 19.010727ms: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
I1014 14:18:09.316523    7544 retry.go:31] will retry after 46.110526ms: open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/scheduled-stop-551203/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-551203 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-551203 -n scheduled-stop-551203
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-551203
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-551203 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-551203
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-551203: exit status 7 (73.299519ms)
-- stdout --
	scheduled-stop-551203
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-551203 -n scheduled-stop-551203
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-551203 -n scheduled-stop-551203: exit status 7 (71.137906ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-551203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-551203
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-551203: (4.532728234s)
--- PASS: TestScheduledStopUnix (104.45s)
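
The retry.go lines above show the pid file being re-read with steadily growing intervals until the scheduled-stop process has written it. An illustrative backoff loop in the same spirit (the path and bounds below are made up for the example):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		wait := 100 * time.Microsecond
		for attempt := 1; attempt <= 16; attempt++ {
			if _, err := os.ReadFile("/tmp/scheduled-stop-example/pid"); err == nil {
				fmt.Println("pid file present after", attempt, "attempts")
				return
			}
			fmt.Printf("will retry after %v\n", wait)
			time.Sleep(wait)
			wait *= 2 // grow the interval; the intervals in the log above also include jitter
		}
		fmt.Println("gave up waiting for pid file")
	}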

TestInsufficientStorage (10.2s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-652914 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-652914 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.778909359s)
-- stdout --
	{"specversion":"1.0","id":"9fe8ca6e-0705-4b56-88a8-b04e72ecf180","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-652914] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fce9d9a-f17a-4819-a9a3-db2b8273b795","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19790"}}
	{"specversion":"1.0","id":"d4d2a48e-8435-4611-87ac-884f03bfe62b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d48814ae-250a-4129-b70d-83a6622691fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig"}}
	{"specversion":"1.0","id":"9c1caeb9-3fcd-45cf-828c-4aa1770ccb76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube"}}
	{"specversion":"1.0","id":"fd56a325-9b41-4582-9f10-f9c051258fa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b204d6c6-0310-41c5-aae5-5faa74063c09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b12b4ec-6880-4139-a1cb-d16707138a33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5a3ee35d-040f-4476-82ee-c460da0540c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"dabf78e6-b2bf-4ee5-a378-a5aefdc06f45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7236884-b319-4050-9d39-59c6d8cbc6d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9917678f-bd3f-4fb1-8d44-fa365a741662","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-652914\" primary control-plane node in \"insufficient-storage-652914\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"eee24a69-7793-4b08-a969-778baeb97dff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1728382586-19774 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"805f3706-c284-4032-8685-99579736907c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b609444-fd4f-45f7-b8a1-7a9126278133","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-652914 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-652914 --output=json --layout=cluster: exit status 7 (283.623998ms)
-- stdout --
	{"Name":"insufficient-storage-652914","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-652914","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1014 14:19:32.851466  146099 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-652914" does not appear in /home/jenkins/minikube-integration/19790-2228/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-652914 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-652914 --output=json --layout=cluster: exit status 7 (279.225811ms)
-- stdout --
	{"Name":"insufficient-storage-652914","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-652914","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1014 14:19:33.132311  146159 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-652914" does not appear in /home/jenkins/minikube-integration/19790-2228/kubeconfig
	E1014 14:19:33.142397  146159 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/insufficient-storage-652914/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-652914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-652914
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-652914: (1.860500602s)
--- PASS: TestInsufficientStorage (10.20s)
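
With --output=json, minikube emits one CloudEvents-style JSON record per line, and the final RSRC_DOCKER_STORAGE event above carries the advice and exit code. A small sketch of consuming such a line, declaring only the fields used (struct shape inferred from the log above, not from minikube's source):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the records in the --output=json stream above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			fmt.Println("bad event:", err)
			return
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}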

TestRunningBinaryUpgrade (81.56s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3139603270 start -p running-upgrade-672271 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1014 14:24:57.112039    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3139603270 start -p running-upgrade-672271 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.392005251s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-672271 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-672271 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.558631519s)
helpers_test.go:175: Cleaning up "running-upgrade-672271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-672271
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-672271: (2.886512467s)
--- PASS: TestRunningBinaryUpgrade (81.56s)

TestKubernetesUpgrade (381.25s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-111519 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-111519 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m11.232148865s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-111519
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-111519: (1.913086679s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-111519 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-111519 status --format={{.Host}}: exit status 7 (88.23105ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-111519 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-111519 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.273250107s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-111519 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-111519 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-111519 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (131.581395ms)
-- stdout --
	* [kubernetes-upgrade-111519] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-111519
	    minikube start -p kubernetes-upgrade-111519 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1115192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-111519 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-111519 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1014 14:26:54.037102    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-111519 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.298826378s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-111519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-111519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-111519: (2.186311094s)
--- PASS: TestKubernetesUpgrade (381.25s)
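
The downgrade attempt above fails fast with K8S_DOWNGRADE_UNSUPPORTED before touching the cluster. The gate amounts to a version comparison; a toy version of that check (not minikube's real version handling, which also copes with pre-release and malformed inputs):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// olderThan reports whether version a precedes b; both look like "v1.31.1".
	func olderThan(a, b string) bool {
		pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
		pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
		for i := 0; i < 3; i++ {
			x, _ := strconv.Atoi(pa[i])
			y, _ := strconv.Atoi(pb[i])
			if x != y {
				return x < y
			}
		}
		return false
	}

	func main() {
		current, requested := "v1.31.1", "v1.20.0"
		if olderThan(requested, current) {
			fmt.Printf("cannot safely downgrade existing Kubernetes %s cluster to %s\n", current, requested)
		}
	}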

TestMissingContainerUpgrade (169.5s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1200348837 start -p missing-upgrade-763627 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1200348837 start -p missing-upgrade-763627 --memory=2200 --driver=docker  --container-runtime=crio: (1m33.426174456s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-763627
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-763627: (10.405712529s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-763627
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-763627 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1014 14:21:31.036446    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:21:54.036207    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-763627 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m2.217359274s)
helpers_test.go:175: Cleaning up "missing-upgrade-763627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-763627
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-763627: (2.354225607s)
--- PASS: TestMissingContainerUpgrade (169.50s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-120525 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-120525 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (75.220918ms)
-- stdout --
	* [NoKubernetes-120525] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
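
Here minikube rejects --kubernetes-version combined with --no-kubernetes before doing any work, returning the usage exit code 14. A toy reproduction of that mutual-exclusion check with the standard flag package (only the flag names and exit code come from the log; the validation logic is illustrative):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
		flag.Parse()
		if *noKubernetes && *kubernetesVersion != "" {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags ok")
	}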

TestNoKubernetes/serial/StartWithK8s (38.56s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-120525 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-120525 --driver=docker  --container-runtime=crio: (38.19862641s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-120525 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.56s)

TestNoKubernetes/serial/StartWithStopK8s (20.03s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-120525 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-120525 --no-kubernetes --driver=docker  --container-runtime=crio: (17.408409184s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-120525 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-120525 status -o json: exit status 2 (462.638002ms)
-- stdout --
	{"Name":"NoKubernetes-120525","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-120525
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-120525: (2.156824906s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.03s)

TestNoKubernetes/serial/Start (5.79s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-120525 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-120525 --no-kubernetes --driver=docker  --container-runtime=crio: (5.790407428s)
--- PASS: TestNoKubernetes/serial/Start (5.79s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-120525 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-120525 "sudo systemctl is-active --quiet service kubelet": exit status 1 (313.860779ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (1.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-120525
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-120525: (1.265127748s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (7.49s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-120525 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-120525 --driver=docker  --container-runtime=crio: (7.488841988s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-120525 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-120525 "sudo systemctl is-active --quiet service kubelet": exit status 1 (330.135898ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/Setup (1.1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.10s)

TestStoppedBinaryUpgrade/Upgrade (116.26s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1340452183 start -p stopped-upgrade-628490 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1340452183 start -p stopped-upgrade-628490 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.252127086s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1340452183 -p stopped-upgrade-628490 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1340452183 -p stopped-upgrade-628490 stop: (2.488617473s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-628490 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-628490 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m18.516476912s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (116.26s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-628490
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-628490: (1.233396067s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

TestPause/serial/Start (49.13s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-460724 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1014 14:26:31.036872    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-460724 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.133381201s)
--- PASS: TestPause/serial/Start (49.13s)

TestPause/serial/SecondStartNoReconfiguration (40.17s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-460724 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-460724 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.144917146s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.17s)

TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-460724 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-460724 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-460724 --output=json --layout=cluster: exit status 2 (321.250636ms)
-- stdout --
	{"Name":"pause-460724","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-460724","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
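
The --layout=cluster JSON above encodes component state with HTTP-flavoured codes: 200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage. Decoding just the top-level fields is enough to branch on them; the struct below declares only what this example reads, with the sample line taken from the log:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// clusterState covers the top-level fields of the --layout=cluster output.
	type clusterState struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	func main() {
		line := `{"Name":"pause-460724","StatusCode":418,"StatusName":"Paused"}`
		var s clusterState
		if err := json.Unmarshal([]byte(line), &s); err != nil {
			fmt.Println("bad status:", err)
			return
		}
		fmt.Printf("%s is %s (code %d)\n", s.Name, s.StatusName, s.StatusCode) // pause-460724 is Paused (code 418)
	}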

TestPause/serial/Unpause (0.78s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-460724 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

TestPause/serial/PauseAgain (1.36s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-460724 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-460724 --alsologtostderr -v=5: (1.356570301s)
--- PASS: TestPause/serial/PauseAgain (1.36s)

TestPause/serial/DeletePaused (2.83s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-460724 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-460724 --alsologtostderr -v=5: (2.826326803s)
--- PASS: TestPause/serial/DeletePaused (2.83s)

TestPause/serial/VerifyDeletedResources (0.41s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-460724
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-460724: exit status 1 (15.449304ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-460724: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.41s)

TestNetworkPlugins/group/false (4.58s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-519407 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-519407 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (264.556822ms)
-- stdout --
	* [false-519407] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1014 14:27:27.660981  186018 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:27:27.661202  186018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:27:27.661229  186018 out.go:358] Setting ErrFile to fd 2...
	I1014 14:27:27.661250  186018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:27:27.661556  186018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2228/.minikube/bin
	I1014 14:27:27.662046  186018 out.go:352] Setting JSON to false
	I1014 14:27:27.663022  186018 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4198,"bootTime":1728911849,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1014 14:27:27.663128  186018 start.go:139] virtualization:  
	I1014 14:27:27.666983  186018 out.go:177] * [false-519407] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 14:27:27.670084  186018 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:27:27.670210  186018 notify.go:220] Checking for updates...
	I1014 14:27:27.676046  186018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:27:27.678903  186018 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2228/kubeconfig
	I1014 14:27:27.681521  186018 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2228/.minikube
	I1014 14:27:27.684128  186018 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 14:27:27.686723  186018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:27:27.689852  186018 config.go:182] Loaded profile config "force-systemd-flag-563719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:27:27.690031  186018 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:27:27.731024  186018 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 14:27:27.731153  186018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 14:27:27.818320  186018 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 14:27:27.807262543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 14:27:27.818426  186018 docker.go:318] overlay module found
	I1014 14:27:27.827071  186018 out.go:177] * Using the docker driver based on user configuration
	I1014 14:27:27.833447  186018 start.go:297] selected driver: docker
	I1014 14:27:27.833468  186018 start.go:901] validating driver "docker" against <nil>
	I1014 14:27:27.833482  186018 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:27:27.839923  186018 out.go:201] 
	W1014 14:27:27.846640  186018 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1014 14:27:27.852607  186018 out.go:201] 

** /stderr **
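
The start above aborts before creating any resources because minikube rejects --cni=false with the crio runtime (the MK_USAGE error in stderr). As a hedged sketch only, and not something this suite runs, a start that satisfies that check would name a concrete CNI instead; "bridge" is one of the values minikube's --cni flag accepts:

	# Sketch only: crio requires a CNI, so pass one explicitly
	# rather than --cni=false.
	out/minikube-linux-arm64 start -p false-519407 --memory=2048 \
	  --cni=bridge --driver=docker --container-runtime=crio
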
net_test.go:88: 
----------------------- debugLogs start: false-519407 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-519407

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-519407

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-519407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-519407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-519407

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-519407

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-519407

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-519407

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-519407

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-519407

>>> host: /etc/nsswitch.conf:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: /etc/hosts:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: /etc/resolv.conf:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-519407

>>> host: crictl pods:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: crictl containers:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> k8s: describe netcat deployment:
error: context "false-519407" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-519407" does not exist

>>> k8s: netcat logs:
error: context "false-519407" does not exist

>>> k8s: describe coredns deployment:
error: context "false-519407" does not exist

>>> k8s: describe coredns pods:
error: context "false-519407" does not exist

>>> k8s: coredns logs:
error: context "false-519407" does not exist

>>> k8s: describe api server pod(s):
error: context "false-519407" does not exist

>>> k8s: api server logs:
error: context "false-519407" does not exist

>>> host: /etc/cni:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: ip a s:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: ip r s:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: iptables-save:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: iptables table nat:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> k8s: describe kube-proxy daemon set:
error: context "false-519407" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-519407" does not exist

>>> k8s: kube-proxy logs:
error: context "false-519407" does not exist

>>> host: kubelet daemon status:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: kubelet daemon config:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> k8s: kubelet logs:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-519407

>>> host: docker daemon status:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: docker daemon config:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: /etc/docker/daemon.json:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: docker system info:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: cri-docker daemon status:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: cri-docker daemon config:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: cri-dockerd version:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: containerd daemon status:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: containerd daemon config:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: /etc/containerd/config.toml:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: containerd config dump:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: crio daemon status:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: crio daemon config:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: /etc/crio:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

>>> host: crio config:
* Profile "false-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-519407"

----------------------- debugLogs end: false-519407 [took: 4.11628029s] --------------------------------
helpers_test.go:175: Cleaning up "false-519407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-519407
--- PASS: TestNetworkPlugins/group/false (4.58s)

TestStartStop/group/old-k8s-version/serial/FirstStart (181.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-690138 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1014 14:29:34.100175    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:31:31.036905    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-690138 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m1.070541216s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (181.07s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-506721 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-506721 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (54.623947319s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.62s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-690138 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [792e866d-36cf-4ac9-8983-f40b6fee3ba5] Pending
helpers_test.go:344: "busybox" [792e866d-36cf-4ac9-8983-f40b6fee3ba5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [792e866d-36cf-4ac9-8983-f40b6fee3ba5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004927946s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-690138 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.83s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-690138 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-690138 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.312445741s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-690138 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.57s)

TestStartStop/group/old-k8s-version/serial/Stop (12.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-690138 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-690138 --alsologtostderr -v=3: (12.240463935s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.24s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-690138 -n old-k8s-version-690138
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-690138 -n old-k8s-version-690138: exit status 7 (125.230659ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-690138 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
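
The status probe above uses minikube's Go-template output, where a stopped host returns exit status 7 and the test treats that as acceptable ("may be ok"). A minimal sketch of the same query, using this run's profile name:

	# Sketch only: read a single status field via a Go template.
	# Exit status 7 here indicates the host is stopped.
	out/minikube-linux-arm64 status --format='{{.Host}}' \
	  -p old-k8s-version-690138 -n old-k8s-version-690138; echo "exit: $?"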

TestStartStop/group/old-k8s-version/serial/SecondStart (148.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-690138 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-690138 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m28.04534181s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-690138 -n old-k8s-version-690138
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (148.39s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-506721 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7b025818-e313-460a-b695-4456b00d170b] Pending
helpers_test.go:344: "busybox" [7b025818-e313-460a-b695-4456b00d170b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7b025818-e313-460a-b695-4456b00d170b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003699286s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-506721 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-506721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-506721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025670929s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-506721 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-506721 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-506721 --alsologtostderr -v=3: (12.596815778s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.60s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-506721 -n default-k8s-diff-port-506721
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-506721 -n default-k8s-diff-port-506721: exit status 7 (105.119961ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-506721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-506721 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-506721 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m49.352450571s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-506721 -n default-k8s-diff-port-506721
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.71s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7mhq9" [7d66e9f9-41df-4255-aac4-e784674e17e2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003596296s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7mhq9" [7d66e9f9-41df-4255-aac4-e784674e17e2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004752693s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-690138 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-690138 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
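
The image check lists everything in the profile's runtime and flags images outside the expected minikube set, as in the three "Found non-minikube image" lines above. A small sketch of inspecting the same output by hand; the jq filter assumes each JSON entry carries a repoTags array, which is an assumption about this minikube build's output rather than something verified here:

	# Sketch only: print the tags of every image in the profile.
	out/minikube-linux-arm64 -p old-k8s-version-690138 image list --format=json \
	  | jq -r '.[].repoTags[]'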

TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-690138 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-690138 -n old-k8s-version-690138
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-690138 -n old-k8s-version-690138: exit status 2 (311.824003ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-690138 -n old-k8s-version-690138
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-690138 -n old-k8s-version-690138: exit status 2 (308.555065ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-690138 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-690138 -n old-k8s-version-690138
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-690138 -n old-k8s-version-690138
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)
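
Condensed, the pause cycle exercised above is: pause, confirm the apiserver reports Paused and the kubelet Stopped (both probes exit with status 2, which the test tolerates), then unpause and re-check. A sketch with this run's profile:

	# Sketch only: the pause/unpause cycle from the test above.
	p=old-k8s-version-690138
	out/minikube-linux-arm64 pause -p "$p" --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p "$p" -n "$p"   # Paused (exit 2)
	out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p "$p" -n "$p"     # Stopped (exit 2)
	out/minikube-linux-arm64 unpause -p "$p" --alsologtostderr -v=1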

TestStartStop/group/embed-certs/serial/FirstStart (55.47s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-865450 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-865450 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (55.464903457s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.47s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-865450 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aed9e3cf-f2a9-40df-bd9f-5e422c309c6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [aed9e3cf-f2a9-40df-bd9f-5e422c309c6f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004070166s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-865450 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-865450 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-865450 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/embed-certs/serial/Stop (11.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-865450 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-865450 --alsologtostderr -v=3: (11.97705136s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-865450 -n embed-certs-865450
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-865450 -n embed-certs-865450: exit status 7 (68.274151ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-865450 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (301.75s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-865450 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1014 14:36:31.036869    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:54.036776    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:57.074690    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:57.081176    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:57.092624    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:57.114005    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:57.155513    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:57.237069    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:57.399199    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:57.720969    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:58.363085    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:36:59.645332    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:02.207654    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:07.329942    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:17.571341    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:38.052786    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-865450 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (5m1.405995201s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-865450 -n embed-certs-865450
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (301.75s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-h8fgk" [94ab9b7c-3560-413a-a802-94e88e7bad1f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004445462s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-h8fgk" [94ab9b7c-3560-413a-a802-94e88e7bad1f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003990976s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-506721 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-506721 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-506721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-506721 -n default-k8s-diff-port-506721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-506721 -n default-k8s-diff-port-506721: exit status 2 (324.911603ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-506721 -n default-k8s-diff-port-506721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-506721 -n default-k8s-diff-port-506721: exit status 2 (307.738103ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-506721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-506721 -n default-k8s-diff-port-506721
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-506721 -n default-k8s-diff-port-506721
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)
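
The pause cycle above encodes minikube's status contract: on a paused profile, status --format={{.APIServer}} prints "Paused" and status --format={{.Kubelet}} prints "Stopped", each with exit status 2, which the test explicitly tolerates ("may be ok"). A minimal sketch of the same round trip against the profile from this run:

    out/minikube-linux-arm64 pause -p default-k8s-diff-port-506721
    # Both status probes exit 2 while paused; '|| true' keeps a strict shell going.
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p default-k8s-diff-port-506721 || true   # Paused
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p default-k8s-diff-port-506721 || true     # Stopped
    out/minikube-linux-arm64 unpause -p default-k8s-diff-port-506721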

TestStartStop/group/no-preload/serial/FirstStart (60.25s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-769768 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-769768 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m0.250199255s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.25s)
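
FirstStart for the no-preload group passes --preload=false, so minikube skips the preloaded image tarball and pulls each Kubernetes image individually, which is why this cold start takes a full minute. The flag combination, reformatted from the run above:

    out/minikube-linux-arm64 start -p no-preload-769768 --memory=2200 \
      --wait=true --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1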

TestStartStop/group/no-preload/serial/DeployApp (12.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-769768 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9b777187-2ae7-4fc7-a128-ff809bed3cb1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9b777187-2ae7-4fc7-a128-ff809bed3cb1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.004087984s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-769768 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.36s)
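
DeployApp is a plain create-and-wait: apply the busybox manifest, wait until the pod labelled integration-test=busybox is Running, then exec a trivial command (ulimit -n) to prove the exec path works. The suite polls through its own helpers; roughly the same check with stock kubectl, assuming the shipped testdata/busybox.yaml, would be:

    kubectl --context no-preload-769768 create -f testdata/busybox.yaml
    # Equivalent of the 8m pod-ready poll the test performs.
    kubectl --context no-preload-769768 wait pod busybox --for=condition=Ready --timeout=8m
    kubectl --context no-preload-769768 exec busybox -- /bin/sh -c "ulimit -n"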

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-769768 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-769768 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)
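
addons enable accepts per-addon overrides: --images remaps a component's image and --registries remaps where it is pulled from. Here metrics-server is pointed at echoserver under a placeholder registry (fake.domain), so the override plumbing is exercised without a real pull. The flag shape, as run above:

    out/minikube-linux-arm64 addons enable metrics-server -p no-preload-769768 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # Confirm the deployment picked up the overridden image reference.
    kubectl --context no-preload-769768 describe deploy/metrics-server -n kube-system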

TestStartStop/group/no-preload/serial/Stop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-769768 --alsologtostderr -v=3
E1014 14:39:40.936381    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-769768 --alsologtostderr -v=3: (12.020982225s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-769768 -n no-preload-769768
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-769768 -n no-preload-769768: exit status 7 (80.894148ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-769768 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
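
On a stopped profile, status --format={{.Host}} prints "Stopped" and exits 7, which the test tolerates before enabling the dashboard addon; this works because minikube records addon selections in the profile's config and applies them on the next start (an inference from this sequence, not something the log states). A sketch of the same steps:

    out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-769768 || true   # Stopped, exit 7
    out/minikube-linux-arm64 addons enable dashboard -p no-preload-769768 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4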

TestStartStop/group/no-preload/serial/SecondStart (280.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-769768 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-769768 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m39.812865815s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-769768 -n no-preload-769768
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (280.24s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wblds" [6b335d8b-70cb-45e1-88fc-002f8e24279e] Running
E1014 14:41:31.036505    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/functional-606999/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003835768s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wblds" [6b335d8b-70cb-45e1-88fc-002f8e24279e] Running
E1014 14:41:37.113610    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005210548s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-865450 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-865450 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-865450 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-865450 -n embed-certs-865450
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-865450 -n embed-certs-865450: exit status 2 (325.232018ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-865450 -n embed-certs-865450
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-865450 -n embed-certs-865450: exit status 2 (326.165367ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-865450 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-865450 -n embed-certs-865450
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-865450 -n embed-certs-865450
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.00s)

TestStartStop/group/newest-cni/serial/FirstStart (33.18s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-263163 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1014 14:41:54.035846    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/addons-002422/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:41:57.074570    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-263163 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (33.178383162s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.18s)
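
The newest-cni start narrows --wait to named components (apiserver, system_pods, default_sa), toggles a feature gate, and pushes a kubeadm setting through --extra-config, minikube's generic component-flag passthrough. The invocation, reformatted for readability:

    out/minikube-linux-arm64 start -p newest-cni-263163 --memory=2200 \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1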

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-263163 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-263163 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.336286715s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/newest-cni/serial/Stop (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-263163 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-263163 --alsologtostderr -v=3: (1.312146769s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-263163 -n newest-cni-263163
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-263163 -n newest-cni-263163: exit status 7 (69.420113ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-263163 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (15.68s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-263163 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1014 14:42:24.777812    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/old-k8s-version-690138/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-263163 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (15.231396374s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-263163 -n newest-cni-263163
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.68s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-263163 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (3.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-263163 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-263163 --alsologtostderr -v=1: (1.081688694s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-263163 -n newest-cni-263163
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-263163 -n newest-cni-263163: exit status 2 (310.203908ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-263163 -n newest-cni-263163
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-263163 -n newest-cni-263163: exit status 2 (326.160706ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-263163 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-263163 -n newest-cni-263163
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-263163 -n newest-cni-263163
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.22s)

TestNetworkPlugins/group/auto/Start (51.26s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1014 14:42:49.567432    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:49.573747    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:49.585086    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:49.606447    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:49.647798    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:49.729233    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:49.890764    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:50.212513    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:50.854533    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:52.135929    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:54.698034    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:59.819387    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:43:10.061150    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:43:30.543251    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (51.257324005s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.26s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-519407 "pgrep -a kubelet"
I1014 14:43:34.611253    7544 config.go:182] Loaded profile config "auto-519407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
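
KubeletFlags greps the kubelet process line over SSH, a quick way to see which container-runtime endpoint and config flags the kubelet was actually started with:

    # pgrep -a prints the full command line, flags included.
    out/minikube-linux-arm64 ssh -p auto-519407 "pgrep -a kubelet"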

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-519407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7kpxp" [a546019b-caba-4a81-885c-0e7021ab741a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7kpxp" [a546019b-caba-4a81-885c-0e7021ab741a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003297823s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-519407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
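
DNS, Localhost, and HairPin all drive traffic through the netcat deployment created in NetCatPod: an in-cluster lookup of kubernetes.default, a loopback connect to the pod's own port 8080, and a hairpin connect back to the pod via its own service name. The three probes, verbatim from the runs above:

    kubectl --context auto-519407 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"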

TestNetworkPlugins/group/kindnet/Start (53.42s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1014 14:44:11.508970    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (53.423645929s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.42s)
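
Each network-plugin group below differs only in the --cni value handed to minikube start; the suite walks the built-in selectors and, in the custom-flannel group, a manifest path. The built-in variants exercised in this run:

    out/minikube-linux-arm64 start -p kindnet-519407 --memory=3072 --cni=kindnet --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p calico-519407 --memory=3072 --cni=calico --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p flannel-519407 --memory=3072 --cni=flannel --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p bridge-519407 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio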

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q2tcz" [6f386f21-737d-47dd-8948-0fd80adb0a80] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004160648s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q2tcz" [6f386f21-737d-47dd-8948-0fd80adb0a80] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004475264s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-769768 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-769768 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.9s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-769768 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-769768 -n no-preload-769768
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-769768 -n no-preload-769768: exit status 2 (350.082296ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-769768 -n no-preload-769768
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-769768 -n no-preload-769768: exit status 2 (382.417157ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-769768 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-769768 -n no-preload-769768
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-769768 -n no-preload-769768
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.90s)
E1014 14:49:15.841980    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/auto-519407/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:21.428732    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:21.435167    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:21.446501    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:21.467972    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:21.509399    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:21.590824    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:21.752373    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:22.074600    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:22.716699    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:23.998505    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:26.560059    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:49:31.681416    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/calico/Start (63.77s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.771206762s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.77s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-kdc95" [63e70cc0-3cd3-4b93-a886-10c24fae10f1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004338527s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-519407 "pgrep -a kubelet"
I1014 14:45:05.123277    7544 config.go:182] Loaded profile config "kindnet-519407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-519407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r5kgt" [6ca55912-f4cd-4b1b-899e-c952da971374] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-r5kgt" [6ca55912-f4cd-4b1b-899e-c952da971374] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003888734s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-519407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.31s)

TestNetworkPlugins/group/custom-flannel/Start (59.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.779658182s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.78s)
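
--cni also accepts a path to a CNI manifest instead of a built-in name; the custom-flannel group passes testdata/kube-flannel.yaml, so minikube applies that manifest rather than provisioning a bundled plugin:

    out/minikube-linux-arm64 start -p custom-flannel-519407 --memory=3072 \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio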

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5rltr" [a78c368d-2b47-44dc-89f2-bdc9c680536a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005407109s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-519407 "pgrep -a kubelet"
I1014 14:45:56.035722    7544 config.go:182] Loaded profile config "calico-519407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (13.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-519407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8fz6n" [7a84df6c-ff5b-46eb-88bd-3e8335ed297a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8fz6n" [7a84df6c-ff5b-46eb-88bd-3e8335ed297a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004672897s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.36s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-519407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (75.84s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m15.835797818s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.84s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-519407 "pgrep -a kubelet"
I1014 14:46:41.563744    7544 config.go:182] Loaded profile config "custom-flannel-519407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-519407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9gk7s" [2b21d0ed-ef20-4c4a-9ef8-37e49ca0bd8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9gk7s" [2b21d0ed-ef20-4c4a-9ef8-37e49ca0bd8b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005019984s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-519407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (54.15s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1014 14:47:49.566876    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.149281181s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.53s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-519407 "pgrep -a kubelet"
I1014 14:47:50.231562    7544 config.go:182] Loaded profile config "enable-default-cni-519407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.53s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-519407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9vfht" [d79800b4-3e9c-4389-a393-8e945208169e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9vfht" [d79800b4-3e9c-4389-a393-8e945208169e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003169934s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.42s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-519407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-r4vqp" [af16d93f-8441-4fd0-b4df-35a0dcf232ea] Running
E1014 14:48:17.272491    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/default-k8s-diff-port-506721/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.007297724s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
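
ControllerPod waits for the plugin's own DaemonSet pod before running traffic tests; note that flannel's controller lives in the kube-flannel namespace under app=flannel, whereas kindnet (app=kindnet) and calico (k8s-app=calico-node) run in kube-system. Roughly the same wait with stock kubectl, in place of the suite's helpers:

    kubectl --context flannel-519407 -n kube-flannel \
      wait pod -l app=flannel --for=condition=Ready --timeout=10m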

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-519407 "pgrep -a kubelet"
I1014 14:48:18.270563    7544 config.go:182] Loaded profile config "flannel-519407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (11.4s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-519407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8kd77" [3c35a8bb-fe71-4bb1-82e1-255d8acffda1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8kd77" [3c35a8bb-fe71-4bb1-82e1-255d8acffda1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004652346s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.40s)

TestNetworkPlugins/group/bridge/Start (73.39s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-519407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m13.390930348s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.39s)
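
For anyone reproducing this start by hand, the flags break down as follows (values exactly as logged above):

	# -p bridge-519407: profile (cluster) name
	# --memory=3072: memory for the node container, in MB
	# --wait=true --wait-timeout=15m: block until core components report healthy
	# --cni=bridge: select the bridge CNI under test
	out/minikube-linux-arm64 start -p bridge-519407 --memory=3072 --alsologtostderr \
	  --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=crio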

TestNetworkPlugins/group/flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-519407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.26s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-519407 "pgrep -a kubelet"
I1014 14:49:38.573535    7544 config.go:182] Loaded profile config "bridge-519407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
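
KubeletFlags asserts only that a kubelet process is running on the node and records its command line; the same probe by hand (profile name from this run):

	# pgrep -a prints the PID plus the full command line, exposing the kubelet flags
	out/minikube-linux-arm64 ssh -p bridge-519407 "pgrep -a kubelet"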

TestNetworkPlugins/group/bridge/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-519407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ckkcj" [ea61dd88-a72d-4ef2-8573-8325f686897d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 14:49:41.923092    7544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2228/.minikube/profiles/no-preload-769768/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-ckkcj" [ea61dd88-a72d-4ef2-8573-8325f686897d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003976603s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)
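
Note that the manifest is applied with kubectl replace --force, which deletes any existing object before re-creating it, so each plugin group gets a fresh netcat pod rather than an in-place update:

	# --force: delete-then-create instead of updating the existing object
	kubectl --context bridge-519407 replace --force -f testdata/netcat-deployment.yaml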

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-519407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-519407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (30/329)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-849591 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-849591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-849591
--- SKIP: TestDownloadOnlyKic (0.53s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.34s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-002422 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:968: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-576627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-576627
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (4.75s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-519407 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-519407

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-519407

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-519407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-519407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-519407

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-519407

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-519407

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-519407

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-519407

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-519407

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: /etc/hosts:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: /etc/resolv.conf:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-519407

>>> host: crictl pods:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: crictl containers:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> k8s: describe netcat deployment:
error: context "kubenet-519407" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-519407" does not exist

>>> k8s: netcat logs:
error: context "kubenet-519407" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-519407" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-519407" does not exist

>>> k8s: coredns logs:
error: context "kubenet-519407" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-519407" does not exist

>>> k8s: api server logs:
error: context "kubenet-519407" does not exist

>>> host: /etc/cni:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: ip a s:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: ip r s:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: iptables-save:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: iptables table nat:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-519407" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-519407" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-519407" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: kubelet daemon config:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> k8s: kubelet logs:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-519407

>>> host: docker daemon status:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: docker daemon config:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: docker system info:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: cri-docker daemon status:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: cri-docker daemon config:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: cri-dockerd version:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: containerd daemon status:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: containerd daemon config:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: containerd config dump:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: crio daemon status:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: crio daemon config:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: /etc/crio:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

>>> host: crio config:
* Profile "kubenet-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-519407"

----------------------- debugLogs end: kubenet-519407 [took: 4.572772216s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-519407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-519407
--- SKIP: TestNetworkPlugins/group/kubenet (4.75s)

TestNetworkPlugins/group/cilium (4.93s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-519407 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-519407

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-519407

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-519407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-519407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-519407

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-519407

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-519407

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-519407

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-519407

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-519407

>>> host: /etc/nsswitch.conf:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: /etc/hosts:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: /etc/resolv.conf:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-519407

>>> host: crictl pods:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: crictl containers:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> k8s: describe netcat deployment:
error: context "cilium-519407" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-519407" does not exist

>>> k8s: netcat logs:
error: context "cilium-519407" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-519407" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-519407" does not exist

>>> k8s: coredns logs:
error: context "cilium-519407" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-519407" does not exist

>>> k8s: api server logs:
error: context "cilium-519407" does not exist

>>> host: /etc/cni:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: ip a s:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: ip r s:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: iptables-save:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: iptables table nat:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-519407

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-519407

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-519407" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-519407" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-519407

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-519407

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-519407" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-519407" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-519407" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-519407" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-519407" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: kubelet daemon config:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> k8s: kubelet logs:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-519407

>>> host: docker daemon status:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: docker daemon config:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: docker system info:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: cri-docker daemon status:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: cri-docker daemon config:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: cri-dockerd version:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

>>> host: containerd daemon status:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-519407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-519407"

                                                
                                                
----------------------- debugLogs end: cilium-519407 [took: 4.671327066s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-519407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-519407
--- SKIP: TestNetworkPlugins/group/cilium (4.93s)
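
Had the test not been skipped, the harness would have walked this profile through a full start/delete cycle. As a rough sketch only (the exact flags the suite passes are not shown in this log; --cni=cilium is minikube's documented way to select the Cilium CNI):

        out/minikube-linux-arm64 start -p cilium-519407 --cni=cilium
        out/minikube-linux-arm64 delete -p cilium-519407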

                                                
                                    