Test Report: Docker_Linux_crio_arm64 19749

50b5d8ee62174b462904730e907edeaa222f14db:2024-10-11:36607

Failed tests (2/329)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress       | 150.99       |
| 37    | TestAddons/parallel/MetricsServer | 343.93       |
TestAddons/parallel/Ingress (150.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-627736 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-627736 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-627736 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8eab6550-1877-4b0a-b87f-da1501f040d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8eab6550-1877-4b0a-b87f-da1501f040d0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003755356s
I1011 21:03:47.189449  282920 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-627736 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.588407417s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
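Not part of the original log: the "ssh: Process exited with status 28" in the stderr above is curl's own exit code propagated through `minikube ssh`; per curl(1), exit code 28 means the operation timed out before a response arrived (here, nginx never answered within curl's window). A hypothetical helper sketching how such exit codes could be classified when triaging this kind of failure:

```shell
# Hypothetical triage helper (not part of the minikube test suite):
# map a curl exit code to a human-readable failure class, per curl(1).
classify_curl_exit() {
  case "$1" in
    0)  echo "ok" ;;
    7)  echo "connection failed" ;;
    28) echo "timeout" ;;
    *)  echo "other ($1)" ;;
  esac
}

classify_curl_exit 28   # prints: timeout
```

With exit 28 the ingress controller was reachable but never served the request, which points at the backend pod or the ingress rule rather than the tunnel itself.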
addons_test.go:286: (dbg) Run:  kubectl --context addons-627736 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-627736
helpers_test.go:235: (dbg) docker inspect addons-627736:

-- stdout --
	[
	    {
	        "Id": "9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85",
	        "Created": "2024-10-11T20:58:28.114718101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284174,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-11T20:58:28.238353139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85/hosts",
	        "LogPath": "/var/lib/docker/containers/9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85/9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85-json.log",
	        "Name": "/addons-627736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-627736:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-627736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/63c4e1db91c64bca9eea91b5f392f4eb6a456636ca1dcba2d63d4a3f43f563df-init/diff:/var/lib/docker/overlay2/71b5c158b789443874429d56b0e70559f5769113100aad8f0c3428abb77f0cef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63c4e1db91c64bca9eea91b5f392f4eb6a456636ca1dcba2d63d4a3f43f563df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63c4e1db91c64bca9eea91b5f392f4eb6a456636ca1dcba2d63d4a3f43f563df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63c4e1db91c64bca9eea91b5f392f4eb6a456636ca1dcba2d63d4a3f43f563df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-627736",
	                "Source": "/var/lib/docker/volumes/addons-627736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-627736",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-627736",
	                "name.minikube.sigs.k8s.io": "addons-627736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6af8f0de907e2156042f51459dce13bdc8e944c37437e28fc613a89c8b8683e8",
	            "SandboxKey": "/var/run/docker/netns/6af8f0de907e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-627736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0344146e4445be92b5ffb06059262a7e24bfaf0cf3d149aa52e9622f8b2646a5",
	                    "EndpointID": "c547f1ac6145b2c2ddf3eeac89f1d8ea66e5187cb9598733847587a3a08da57d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-627736",
	                        "9cbb45944b0e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-627736 -n addons-627736
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-627736 logs -n 25: (1.595717357s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-550167                                                                     | download-only-550167   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| delete  | -p download-only-455194                                                                     | download-only-455194   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| start   | --download-only -p                                                                          | download-docker-358295 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | download-docker-358295                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-358295                                                                   | download-docker-358295 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-919124   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | binary-mirror-919124                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45157                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-919124                                                                     | binary-mirror-919124   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| addons  | disable dashboard -p                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | addons-627736                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | addons-627736                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-627736 --wait=true                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 21:01 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:01 UTC | 11 Oct 24 21:01 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:01 UTC | 11 Oct 24 21:01 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-627736 ip                                                                            | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-627736 addons                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-627736 addons                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | -p addons-627736                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-627736 ssh cat                                                                       | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | /opt/local-path-provisioner/pvc-1c41c8d6-e192-4aab-96f5-793834495bbd_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-627736 addons                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-627736 addons                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-627736 addons                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-627736 ssh curl -s                                                                   | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-627736 ip                                                                            | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:05 UTC | 11 Oct 24 21:05 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:58:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:58:03.880755  283686 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:58:03.880963  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:03.880977  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 20:58:03.880984  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:03.881378  283686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	I1011 20:58:03.881938  283686 out.go:352] Setting JSON to false
	I1011 20:58:03.883321  283686 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9627,"bootTime":1728670657,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1011 20:58:03.883421  283686 start.go:139] virtualization:  
	I1011 20:58:03.885288  283686 out.go:177] * [addons-627736] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 20:58:03.887096  283686 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 20:58:03.887220  283686 notify.go:220] Checking for updates...
	I1011 20:58:03.889732  283686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:58:03.891136  283686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	I1011 20:58:03.892576  283686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	I1011 20:58:03.894143  283686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 20:58:03.895316  283686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 20:58:03.896729  283686 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:58:03.917109  283686 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 20:58:03.917241  283686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:58:03.980042  283686 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-11 20:58:03.970919229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:58:03.980156  283686 docker.go:318] overlay module found
	I1011 20:58:03.982242  283686 out.go:177] * Using the docker driver based on user configuration
	I1011 20:58:03.983375  283686 start.go:297] selected driver: docker
	I1011 20:58:03.983391  283686 start.go:901] validating driver "docker" against <nil>
	I1011 20:58:03.983405  283686 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 20:58:03.984042  283686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:58:04.031998  283686 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-11 20:58:04.022413748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:58:04.032222  283686 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:58:04.032451  283686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 20:58:04.034037  283686 out.go:177] * Using Docker driver with root privileges
	I1011 20:58:04.035534  283686 cni.go:84] Creating CNI manager for ""
	I1011 20:58:04.035598  283686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1011 20:58:04.035616  283686 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 20:58:04.035695  283686 start.go:340] cluster config:
	{Name:addons-627736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-627736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:58:04.037360  283686 out.go:177] * Starting "addons-627736" primary control-plane node in "addons-627736" cluster
	I1011 20:58:04.038669  283686 cache.go:121] Beginning downloading kic base image for docker with crio
	I1011 20:58:04.040244  283686 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1011 20:58:04.041386  283686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:58:04.041433  283686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1011 20:58:04.041456  283686 cache.go:56] Caching tarball of preloaded images
	I1011 20:58:04.041478  283686 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1011 20:58:04.041540  283686 preload.go:172] Found /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1011 20:58:04.041550  283686 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 20:58:04.041901  283686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/config.json ...
	I1011 20:58:04.041921  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/config.json: {Name:mkb65e81161297914bc823260d8d954cd6c3cfff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:04.056055  283686 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1011 20:58:04.056170  283686 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1011 20:58:04.056201  283686 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1011 20:58:04.056209  283686 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1011 20:58:04.056217  283686 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1011 20:58:04.056223  283686 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1011 20:58:21.249752  283686 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1011 20:58:21.249794  283686 cache.go:194] Successfully downloaded all kic artifacts
	I1011 20:58:21.249839  283686 start.go:360] acquireMachinesLock for addons-627736: {Name:mkf3c6eb944bfebe208beb6538a765296fcc1455 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:21.249966  283686 start.go:364] duration metric: took 103.004µs to acquireMachinesLock for "addons-627736"
	I1011 20:58:21.250006  283686 start.go:93] Provisioning new machine with config: &{Name:addons-627736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-627736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 20:58:21.250082  283686 start.go:125] createHost starting for "" (driver="docker")
	I1011 20:58:21.251907  283686 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1011 20:58:21.252146  283686 start.go:159] libmachine.API.Create for "addons-627736" (driver="docker")
	I1011 20:58:21.252179  283686 client.go:168] LocalClient.Create starting
	I1011 20:58:21.252282  283686 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem
	I1011 20:58:21.623420  283686 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/cert.pem
	I1011 20:58:21.831240  283686 cli_runner.go:164] Run: docker network inspect addons-627736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1011 20:58:21.846209  283686 cli_runner.go:211] docker network inspect addons-627736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1011 20:58:21.846310  283686 network_create.go:284] running [docker network inspect addons-627736] to gather additional debugging logs...
	I1011 20:58:21.846333  283686 cli_runner.go:164] Run: docker network inspect addons-627736
	W1011 20:58:21.861211  283686 cli_runner.go:211] docker network inspect addons-627736 returned with exit code 1
	I1011 20:58:21.861244  283686 network_create.go:287] error running [docker network inspect addons-627736]: docker network inspect addons-627736: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-627736 not found
	I1011 20:58:21.861258  283686 network_create.go:289] output of [docker network inspect addons-627736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-627736 not found
	
	** /stderr **
	I1011 20:58:21.861381  283686 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 20:58:21.876600  283686 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c75e40}
	I1011 20:58:21.876644  283686 network_create.go:124] attempt to create docker network addons-627736 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1011 20:58:21.876707  283686 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-627736 addons-627736
	I1011 20:58:21.951616  283686 network_create.go:108] docker network addons-627736 192.168.49.0/24 created
	I1011 20:58:21.951650  283686 kic.go:121] calculated static IP "192.168.49.2" for the "addons-627736" container
	I1011 20:58:21.951736  283686 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1011 20:58:21.966398  283686 cli_runner.go:164] Run: docker volume create addons-627736 --label name.minikube.sigs.k8s.io=addons-627736 --label created_by.minikube.sigs.k8s.io=true
	I1011 20:58:21.981587  283686 oci.go:103] Successfully created a docker volume addons-627736
	I1011 20:58:21.981687  283686 cli_runner.go:164] Run: docker run --rm --name addons-627736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-627736 --entrypoint /usr/bin/test -v addons-627736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1011 20:58:23.992207  283686 cli_runner.go:217] Completed: docker run --rm --name addons-627736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-627736 --entrypoint /usr/bin/test -v addons-627736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (2.010477722s)
	I1011 20:58:23.992237  283686 oci.go:107] Successfully prepared a docker volume addons-627736
	I1011 20:58:23.992264  283686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:58:23.992283  283686 kic.go:194] Starting extracting preloaded images to volume ...
	I1011 20:58:23.992349  283686 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-627736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1011 20:58:28.046362  283686 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-627736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.053952635s)
	I1011 20:58:28.046399  283686 kic.go:203] duration metric: took 4.054111147s to extract preloaded images to volume ...
	W1011 20:58:28.046535  283686 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1011 20:58:28.046640  283686 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1011 20:58:28.100545  283686 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-627736 --name addons-627736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-627736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-627736 --network addons-627736 --ip 192.168.49.2 --volume addons-627736:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1011 20:58:28.401077  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Running}}
	I1011 20:58:28.424804  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:28.448134  283686 cli_runner.go:164] Run: docker exec addons-627736 stat /var/lib/dpkg/alternatives/iptables
	I1011 20:58:28.509241  283686 oci.go:144] the created container "addons-627736" has a running status.
	I1011 20:58:28.509332  283686 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa...
	I1011 20:58:29.289444  283686 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1011 20:58:29.325985  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:29.345696  283686 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1011 20:58:29.345722  283686 kic_runner.go:114] Args: [docker exec --privileged addons-627736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1011 20:58:29.429963  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:29.449794  283686 machine.go:93] provisionDockerMachine start ...
	I1011 20:58:29.449886  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:29.472739  283686 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:29.473009  283686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1011 20:58:29.473019  283686 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 20:58:29.602251  283686 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-627736
	
	I1011 20:58:29.602291  283686 ubuntu.go:169] provisioning hostname "addons-627736"
	I1011 20:58:29.602359  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:29.622090  283686 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:29.622336  283686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1011 20:58:29.622353  283686 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-627736 && echo "addons-627736" | sudo tee /etc/hostname
	I1011 20:58:29.762000  283686 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-627736
	
	I1011 20:58:29.762078  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:29.779890  283686 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:29.780134  283686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1011 20:58:29.780156  283686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-627736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-627736/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-627736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 20:58:29.906682  283686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 20:58:29.906709  283686 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19749-277533/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-277533/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-277533/.minikube}
	I1011 20:58:29.906741  283686 ubuntu.go:177] setting up certificates
	I1011 20:58:29.906753  283686 provision.go:84] configureAuth start
	I1011 20:58:29.906822  283686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-627736
	I1011 20:58:29.923206  283686 provision.go:143] copyHostCerts
	I1011 20:58:29.923294  283686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-277533/.minikube/ca.pem (1078 bytes)
	I1011 20:58:29.923429  283686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-277533/.minikube/cert.pem (1123 bytes)
	I1011 20:58:29.923489  283686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-277533/.minikube/key.pem (1679 bytes)
	I1011 20:58:29.923575  283686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-277533/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca-key.pem org=jenkins.addons-627736 san=[127.0.0.1 192.168.49.2 addons-627736 localhost minikube]
	I1011 20:58:30.229960  283686 provision.go:177] copyRemoteCerts
	I1011 20:58:30.230035  283686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 20:58:30.230086  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.246690  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:30.339941  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1011 20:58:30.364285  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 20:58:30.389324  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 20:58:30.413946  283686 provision.go:87] duration metric: took 507.17495ms to configureAuth
	I1011 20:58:30.414016  283686 ubuntu.go:193] setting minikube options for container-runtime
	I1011 20:58:30.414235  283686 config.go:182] Loaded profile config "addons-627736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:58:30.414357  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.430663  283686 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:30.431005  283686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1011 20:58:30.431032  283686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 20:58:30.658332  283686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 20:58:30.658406  283686 machine.go:96] duration metric: took 1.208590933s to provisionDockerMachine
	I1011 20:58:30.658432  283686 client.go:171] duration metric: took 9.406242299s to LocalClient.Create
	I1011 20:58:30.658480  283686 start.go:167] duration metric: took 9.406334211s to libmachine.API.Create "addons-627736"
	I1011 20:58:30.658506  283686 start.go:293] postStartSetup for "addons-627736" (driver="docker")
	I1011 20:58:30.658534  283686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 20:58:30.658686  283686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 20:58:30.658793  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.676041  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:30.767963  283686 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 20:58:30.771357  283686 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 20:58:30.771394  283686 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 20:58:30.771406  283686 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 20:58:30.771413  283686 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1011 20:58:30.771425  283686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-277533/.minikube/addons for local assets ...
	I1011 20:58:30.771501  283686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-277533/.minikube/files for local assets ...
	I1011 20:58:30.771529  283686 start.go:296] duration metric: took 113.001425ms for postStartSetup
	I1011 20:58:30.771866  283686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-627736
	I1011 20:58:30.787918  283686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/config.json ...
	I1011 20:58:30.788202  283686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 20:58:30.788255  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.804194  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:30.895929  283686 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 20:58:30.900658  283686 start.go:128] duration metric: took 9.650559191s to createHost
	I1011 20:58:30.900686  283686 start.go:83] releasing machines lock for "addons-627736", held for 9.65070774s
	I1011 20:58:30.900772  283686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-627736
	I1011 20:58:30.916556  283686 ssh_runner.go:195] Run: cat /version.json
	I1011 20:58:30.916586  283686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 20:58:30.916611  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.916664  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.936317  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:30.945253  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:31.156867  283686 ssh_runner.go:195] Run: systemctl --version
	I1011 20:58:31.161186  283686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 20:58:31.305800  283686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 20:58:31.310367  283686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 20:58:31.331473  283686 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1011 20:58:31.331562  283686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 20:58:31.363513  283686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
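The two `find ... -exec mv` runs above disable the image's default loopback and bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix. A minimal sketch of the same pattern, replayed against a scratch directory instead of `/etc/cni/net.d` (the filenames here are illustrative):

```shell
set -eu
cni=$(mktemp -d)
touch "$cni/87-podman-bridge.conflist" "$cni/100-crio-bridge.conf" "$cni/10-kindnet.conflist"

# Same find pattern the log shows: rename bridge/podman configs to *.mk_disabled,
# skipping anything already disabled
find "$cni" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
  -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$cni"
```

Running it twice is harmless: already-renamed files no longer match the filter.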
	I1011 20:58:31.363536  283686 start.go:495] detecting cgroup driver to use...
	I1011 20:58:31.363600  283686 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1011 20:58:31.363666  283686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 20:58:31.379890  283686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 20:58:31.391045  283686 docker.go:217] disabling cri-docker service (if available) ...
	I1011 20:58:31.391120  283686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 20:58:31.405838  283686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 20:58:31.420382  283686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 20:58:31.511648  283686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 20:58:31.612435  283686 docker.go:233] disabling docker service ...
	I1011 20:58:31.612560  283686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 20:58:31.633654  283686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 20:58:31.646033  283686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 20:58:31.738736  283686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 20:58:31.833045  283686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 20:58:31.844054  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 20:58:31.860719  283686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 20:58:31.860797  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.871060  283686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 20:58:31.871142  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.881916  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.891430  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.901069  283686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 20:58:31.910112  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.919683  283686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.935721  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
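The run of `sed` edits above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch the cgroup manager to cgroupfs, reset `conmon_cgroup`, and open unprivileged low ports via `default_sysctls`. A self-contained replay against a scratch copy (the sample file contents below are an assumption, not the image's real defaults; requires GNU sed):

```shell
set -eu
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same substitutions as the log, minus sudo and with the scratch path
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

cat "$conf"
```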
	I1011 20:58:31.945571  283686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 20:58:31.954207  283686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 20:58:31.962756  283686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:58:32.046338  283686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 20:58:32.158355  283686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 20:58:32.158451  283686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 20:58:32.162188  283686 start.go:563] Will wait 60s for crictl version
	I1011 20:58:32.162252  283686 ssh_runner.go:195] Run: which crictl
	I1011 20:58:32.165699  283686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 20:58:32.206326  283686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1011 20:58:32.206430  283686 ssh_runner.go:195] Run: crio --version
	I1011 20:58:32.243108  283686 ssh_runner.go:195] Run: crio --version
	I1011 20:58:32.283236  283686 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1011 20:58:32.284599  283686 cli_runner.go:164] Run: docker network inspect addons-627736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 20:58:32.298937  283686 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1011 20:58:32.302470  283686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
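The `/etc/hosts` rewrite above uses a grep-out-then-append pattern so the `host.minikube.internal` entry stays unique no matter how many times it runs. A sketch against a throwaway hosts file (IP and hostname taken from the log):

```shell
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

# Drop any stale entry for the name, then append the current mapping;
# running this twice still leaves exactly one entry.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```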
	I1011 20:58:32.313319  283686 kubeadm.go:883] updating cluster {Name:addons-627736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-627736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1011 20:58:32.313444  283686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:58:32.313501  283686 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 20:58:32.384346  283686 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 20:58:32.384370  283686 crio.go:433] Images already preloaded, skipping extraction
	I1011 20:58:32.384428  283686 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 20:58:32.420745  283686 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 20:58:32.420771  283686 cache_images.go:84] Images are preloaded, skipping loading
	I1011 20:58:32.420779  283686 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1011 20:58:32.420868  283686 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-627736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-627736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 20:58:32.420951  283686 ssh_runner.go:195] Run: crio config
	I1011 20:58:32.471284  283686 cni.go:84] Creating CNI manager for ""
	I1011 20:58:32.471307  283686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1011 20:58:32.471319  283686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 20:58:32.471344  283686 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-627736 NodeName:addons-627736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 20:58:32.471491  283686 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-627736"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 20:58:32.471563  283686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 20:58:32.480367  283686 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 20:58:32.480467  283686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 20:58:32.489215  283686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1011 20:58:32.507493  283686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 20:58:32.525609  283686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1011 20:58:32.543454  283686 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1011 20:58:32.546616  283686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 20:58:32.557675  283686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:58:32.642560  283686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 20:58:32.655924  283686 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736 for IP: 192.168.49.2
	I1011 20:58:32.655962  283686 certs.go:194] generating shared ca certs ...
	I1011 20:58:32.655978  283686 certs.go:226] acquiring lock for ca certs: {Name:mk54de457899109c47c9262eb70cea93f226fb7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:32.656695  283686 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-277533/.minikube/ca.key
	I1011 20:58:33.120366  283686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt ...
	I1011 20:58:33.120397  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt: {Name:mk35e22facab7399875c11316c5e90e2812fb42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:33.120600  283686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-277533/.minikube/ca.key ...
	I1011 20:58:33.120616  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/ca.key: {Name:mk3f7e21b09c48a1e47b9012985e77cb50d8340c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:33.120731  283686 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.key
	I1011 20:58:33.772414  283686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.crt ...
	I1011 20:58:33.772446  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.crt: {Name:mk307405633594918b57a6584f1a74b6db576163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:33.772644  283686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.key ...
	I1011 20:58:33.772657  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.key: {Name:mka331898c20bab8f0b0cc436658a676570f7a25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:33.773198  283686 certs.go:256] generating profile certs ...
	I1011 20:58:33.773287  283686 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.key
	I1011 20:58:33.773305  283686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt with IP's: []
	I1011 20:58:34.276641  283686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt ...
	I1011 20:58:34.276679  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: {Name:mk06faad2ead76e76fe953049fcc04a05cd3d303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.276875  283686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.key ...
	I1011 20:58:34.276888  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.key: {Name:mkb296892a42f0228b7f0f5199473b64a3b763a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.276971  283686 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key.b4fb07ca
	I1011 20:58:34.276992  283686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt.b4fb07ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1011 20:58:34.975206  283686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt.b4fb07ca ...
	I1011 20:58:34.975239  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt.b4fb07ca: {Name:mk3f1afa3d6f256a3919fc5dd2e40459f4a45811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.975428  283686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key.b4fb07ca ...
	I1011 20:58:34.975443  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key.b4fb07ca: {Name:mkfe5252482039790baf8249a5dccdaf06a315d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.975922  283686 certs.go:381] copying /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt.b4fb07ca -> /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt
	I1011 20:58:34.976013  283686 certs.go:385] copying /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key.b4fb07ca -> /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key
	I1011 20:58:34.976067  283686 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.key
	I1011 20:58:34.976091  283686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.crt with IP's: []
	I1011 20:58:35.199679  283686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.crt ...
	I1011 20:58:35.199709  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.crt: {Name:mk98e1ffdd01236d0fe4f5851e298fed70995f6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:35.200249  283686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.key ...
	I1011 20:58:35.200266  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.key: {Name:mk2bc2d21b88b1aacf8b6f48b230ed56733a4ddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:35.200462  283686 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 20:58:35.200514  283686 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem (1078 bytes)
	I1011 20:58:35.200544  283686 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/cert.pem (1123 bytes)
	I1011 20:58:35.200573  283686 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/key.pem (1679 bytes)
	I1011 20:58:35.201205  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 20:58:35.226764  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1011 20:58:35.252289  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 20:58:35.277250  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 20:58:35.301724  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1011 20:58:35.325107  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 20:58:35.348852  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 20:58:35.372585  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 20:58:35.395939  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 20:58:35.419420  283686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 20:58:35.437244  283686 ssh_runner.go:195] Run: openssl version
	I1011 20:58:35.442717  283686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 20:58:35.452447  283686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:58:35.456023  283686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:58:35.456137  283686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:58:35.463031  283686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
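The `openssl x509 -hash` step above computes the subject-name hash that OpenSSL uses to look up CA certificates in a certs directory by `<hash>.0` filename, which is why minikube links `/etc/ssl/certs/b5213941.0` at `minikubeCA.pem`. A scratch reproduction, assuming the `openssl` CLI is available (the throwaway self-signed cert and CN below are illustrative):

```shell
set -eu
dir=$(mktemp -d)
# Generate a throwaway self-signed CA cert to hash
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=minikubeCA' \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" -days 1 2>/dev/null

# Subject-hash lookup name: OpenSSL resolves CAs in a certs dir as <hash>.0
hash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"
ls "$dir/$hash.0"
```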
	I1011 20:58:35.472473  283686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 20:58:35.475697  283686 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 20:58:35.475749  283686 kubeadm.go:392] StartCluster: {Name:addons-627736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-627736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:58:35.475828  283686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 20:58:35.475885  283686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 20:58:35.521690  283686 cri.go:89] found id: ""
	I1011 20:58:35.521762  283686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 20:58:35.531125  283686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 20:58:35.539897  283686 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1011 20:58:35.540012  283686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 20:58:35.549071  283686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 20:58:35.549092  283686 kubeadm.go:157] found existing configuration files:
	
	I1011 20:58:35.549167  283686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 20:58:35.557942  283686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 20:58:35.558060  283686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 20:58:35.566689  283686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 20:58:35.575289  283686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 20:58:35.575354  283686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 20:58:35.584554  283686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 20:58:35.594387  283686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 20:58:35.594452  283686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 20:58:35.605806  283686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 20:58:35.615484  283686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 20:58:35.615549  283686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 20:58:35.624972  283686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1011 20:58:35.673707  283686 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 20:58:35.673767  283686 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 20:58:35.693219  283686 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1011 20:58:35.693380  283686 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1011 20:58:35.693454  283686 kubeadm.go:310] OS: Linux
	I1011 20:58:35.693536  283686 kubeadm.go:310] CGROUPS_CPU: enabled
	I1011 20:58:35.693621  283686 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1011 20:58:35.693715  283686 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1011 20:58:35.693784  283686 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1011 20:58:35.693836  283686 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1011 20:58:35.693889  283686 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1011 20:58:35.693938  283686 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1011 20:58:35.693989  283686 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1011 20:58:35.694040  283686 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1011 20:58:35.754926  283686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 20:58:35.755054  283686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 20:58:35.755148  283686 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 20:58:35.761410  283686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 20:58:35.764002  283686 out.go:235]   - Generating certificates and keys ...
	I1011 20:58:35.764108  283686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 20:58:35.764179  283686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 20:58:36.070218  283686 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 20:58:36.922122  283686 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 20:58:37.433341  283686 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 20:58:37.653320  283686 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 20:58:37.823798  283686 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 20:58:37.824189  283686 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-627736 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1011 20:58:38.312503  283686 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 20:58:38.312871  283686 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-627736 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1011 20:58:38.676067  283686 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 20:58:38.842380  283686 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 20:58:39.530467  283686 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 20:58:39.530926  283686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 20:58:40.148868  283686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 20:58:40.886377  283686 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 20:58:41.295098  283686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 20:58:41.734649  283686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 20:58:42.364380  283686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 20:58:42.365488  283686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 20:58:42.368899  283686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 20:58:42.371042  283686 out.go:235]   - Booting up control plane ...
	I1011 20:58:42.371145  283686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 20:58:42.371221  283686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 20:58:42.387437  283686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 20:58:42.402593  283686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 20:58:42.408517  283686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 20:58:42.408575  283686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 20:58:42.493471  283686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 20:58:42.493591  283686 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 20:58:43.995160  283686 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501702844s
	I1011 20:58:43.995252  283686 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 20:58:49.498441  283686 kubeadm.go:310] [api-check] The API server is healthy after 5.503335569s
	I1011 20:58:49.524063  283686 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 20:58:49.538529  283686 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 20:58:49.565479  283686 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 20:58:49.565678  283686 kubeadm.go:310] [mark-control-plane] Marking the node addons-627736 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 20:58:49.577116  283686 kubeadm.go:310] [bootstrap-token] Using token: t2uypf.gy0wdc6zxqr3x4o7
	I1011 20:58:49.579785  283686 out.go:235]   - Configuring RBAC rules ...
	I1011 20:58:49.579915  283686 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 20:58:49.584510  283686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 20:58:49.595062  283686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 20:58:49.599221  283686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 20:58:49.603318  283686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 20:58:49.607650  283686 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 20:58:49.905742  283686 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 20:58:50.374620  283686 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 20:58:50.905373  283686 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 20:58:50.906627  283686 kubeadm.go:310] 
	I1011 20:58:50.906702  283686 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 20:58:50.906712  283686 kubeadm.go:310] 
	I1011 20:58:50.906788  283686 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 20:58:50.906798  283686 kubeadm.go:310] 
	I1011 20:58:50.906824  283686 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 20:58:50.906903  283686 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 20:58:50.906958  283686 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 20:58:50.906971  283686 kubeadm.go:310] 
	I1011 20:58:50.907025  283686 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 20:58:50.907033  283686 kubeadm.go:310] 
	I1011 20:58:50.907080  283686 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 20:58:50.907089  283686 kubeadm.go:310] 
	I1011 20:58:50.907141  283686 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 20:58:50.907218  283686 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 20:58:50.907290  283686 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 20:58:50.907296  283686 kubeadm.go:310] 
	I1011 20:58:50.907380  283686 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 20:58:50.907458  283686 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 20:58:50.907468  283686 kubeadm.go:310] 
	I1011 20:58:50.907550  283686 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t2uypf.gy0wdc6zxqr3x4o7 \
	I1011 20:58:50.907656  283686 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3ad57be593f5ef8d7070016b8fd5a352b0a6c8ca865fb469493e29f8ed14cb \
	I1011 20:58:50.907680  283686 kubeadm.go:310] 	--control-plane 
	I1011 20:58:50.907688  283686 kubeadm.go:310] 
	I1011 20:58:50.907771  283686 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 20:58:50.907780  283686 kubeadm.go:310] 
	I1011 20:58:50.907861  283686 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t2uypf.gy0wdc6zxqr3x4o7 \
	I1011 20:58:50.907965  283686 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3ad57be593f5ef8d7070016b8fd5a352b0a6c8ca865fb469493e29f8ed14cb 
	I1011 20:58:50.912416  283686 kubeadm.go:310] W1011 20:58:35.668720    1186 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 20:58:50.912725  283686 kubeadm.go:310] W1011 20:58:35.669516    1186 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 20:58:50.912941  283686 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1011 20:58:50.913048  283686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 20:58:50.913068  283686 cni.go:84] Creating CNI manager for ""
	I1011 20:58:50.913076  283686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1011 20:58:50.916049  283686 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1011 20:58:50.918761  283686 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1011 20:58:50.922580  283686 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1011 20:58:50.922641  283686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1011 20:58:50.942107  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1011 20:58:51.221260  283686 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 20:58:51.221478  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-627736 minikube.k8s.io/updated_at=2024_10_11T20_58_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=addons-627736 minikube.k8s.io/primary=true
	I1011 20:58:51.221400  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:51.409948  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:51.410005  283686 ops.go:34] apiserver oom_adj: -16
	I1011 20:58:51.910236  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:52.410534  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:52.910770  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:53.410898  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:53.910062  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:54.410050  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:54.910861  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:55.410967  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:55.524820  283686 kubeadm.go:1113] duration metric: took 4.303477905s to wait for elevateKubeSystemPrivileges
	I1011 20:58:55.524856  283686 kubeadm.go:394] duration metric: took 20.049111377s to StartCluster
	I1011 20:58:55.524876  283686 settings.go:142] acquiring lock: {Name:mkd159174089de36fda894bd942ff4e38ae67976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:55.525008  283686 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-277533/kubeconfig
	I1011 20:58:55.525386  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/kubeconfig: {Name:mk2d78d1d8080a1deb25ffe9f98ce4dff6104211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:55.525591  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 20:58:55.525605  283686 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 20:58:55.525840  283686 config.go:182] Loaded profile config "addons-627736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:58:55.525870  283686 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1011 20:58:55.525946  283686 addons.go:69] Setting yakd=true in profile "addons-627736"
	I1011 20:58:55.525965  283686 addons.go:234] Setting addon yakd=true in "addons-627736"
	I1011 20:58:55.525988  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.526454  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.526784  283686 addons.go:69] Setting inspektor-gadget=true in profile "addons-627736"
	I1011 20:58:55.526803  283686 addons.go:234] Setting addon inspektor-gadget=true in "addons-627736"
	I1011 20:58:55.526827  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.527307  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.527846  283686 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-627736"
	I1011 20:58:55.527869  283686 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-627736"
	I1011 20:58:55.527894  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.528290  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.530487  283686 addons.go:69] Setting cloud-spanner=true in profile "addons-627736"
	I1011 20:58:55.530524  283686 addons.go:234] Setting addon cloud-spanner=true in "addons-627736"
	I1011 20:58:55.530564  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.532022  283686 addons.go:69] Setting metrics-server=true in profile "addons-627736"
	I1011 20:58:55.532087  283686 addons.go:234] Setting addon metrics-server=true in "addons-627736"
	I1011 20:58:55.532138  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.532643  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.533592  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.539366  283686 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-627736"
	I1011 20:58:55.539413  283686 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-627736"
	I1011 20:58:55.539450  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.539927  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.541084  283686 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-627736"
	I1011 20:58:55.541175  283686 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-627736"
	I1011 20:58:55.580788  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.581304  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.541324  283686 addons.go:69] Setting default-storageclass=true in profile "addons-627736"
	I1011 20:58:55.603728  283686 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-627736"
	I1011 20:58:55.604193  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.604462  283686 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1011 20:58:55.541337  283686 addons.go:69] Setting gcp-auth=true in profile "addons-627736"
	I1011 20:58:55.618399  283686 mustload.go:65] Loading cluster: addons-627736
	I1011 20:58:55.618686  283686 config.go:182] Loaded profile config "addons-627736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:58:55.619099  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.541341  283686 addons.go:69] Setting ingress=true in profile "addons-627736"
	I1011 20:58:55.630399  283686 addons.go:234] Setting addon ingress=true in "addons-627736"
	I1011 20:58:55.630463  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.630972  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.632197  283686 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1011 20:58:55.632257  283686 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1011 20:58:55.632346  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.541345  283686 addons.go:69] Setting ingress-dns=true in profile "addons-627736"
	I1011 20:58:55.561549  283686 addons.go:69] Setting registry=true in profile "addons-627736"
	I1011 20:58:55.636286  283686 addons.go:234] Setting addon registry=true in "addons-627736"
	I1011 20:58:55.636356  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.561564  283686 addons.go:69] Setting storage-provisioner=true in profile "addons-627736"
	I1011 20:58:55.637220  283686 addons.go:234] Setting addon storage-provisioner=true in "addons-627736"
	I1011 20:58:55.637247  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.561573  283686 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-627736"
	I1011 20:58:55.647937  283686 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-627736"
	I1011 20:58:55.561577  283686 addons.go:69] Setting volcano=true in profile "addons-627736"
	I1011 20:58:55.648277  283686 addons.go:234] Setting addon volcano=true in "addons-627736"
	I1011 20:58:55.648308  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.561581  283686 addons.go:69] Setting volumesnapshots=true in profile "addons-627736"
	I1011 20:58:55.648411  283686 addons.go:234] Setting addon volumesnapshots=true in "addons-627736"
	I1011 20:58:55.648436  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.563901  283686 out.go:177] * Verifying Kubernetes components...
	I1011 20:58:55.648585  283686 addons.go:234] Setting addon ingress-dns=true in "addons-627736"
	I1011 20:58:55.648635  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.649091  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.662560  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.688730  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.694543  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.710283  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.727069  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.729985  283686 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1011 20:58:55.732621  283686 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1011 20:58:55.732655  283686 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1011 20:58:55.732725  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.746271  283686 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1011 20:58:55.746713  283686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:58:55.779014  283686 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1011 20:58:55.781665  283686 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1011 20:58:55.781688  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1011 20:58:55.781753  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.783964  283686 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1011 20:58:55.786510  283686 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1011 20:58:55.786611  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1011 20:58:55.786733  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.793911  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1011 20:58:55.800444  283686 addons.go:234] Setting addon default-storageclass=true in "addons-627736"
	I1011 20:58:55.802612  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.803116  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.808163  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.809966  283686 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 20:58:55.809985  283686 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 20:58:55.810036  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.815491  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1011 20:58:55.818196  283686 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1011 20:58:55.826636  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:55.834666  283686 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1011 20:58:55.837203  283686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1011 20:58:55.837521  283686 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1011 20:58:55.837541  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1011 20:58:55.837610  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.854671  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1011 20:58:55.866803  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1011 20:58:55.869718  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1011 20:58:55.879108  283686 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1011 20:58:55.879135  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1011 20:58:55.879205  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.901399  283686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:58:55.901691  283686 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-627736"
	I1011 20:58:55.901731  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.902180  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.911845  283686 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1011 20:58:55.914561  283686 out.go:177]   - Using image docker.io/registry:2.8.3
	I1011 20:58:55.917191  283686 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1011 20:58:55.917265  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1011 20:58:55.917361  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.920229  283686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:58:55.930770  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W1011 20:58:55.932628  283686 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1011 20:58:55.933060  283686 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1011 20:58:55.933104  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1011 20:58:55.933199  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.958384  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1011 20:58:55.961466  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1011 20:58:55.961610  283686 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 20:58:55.964459  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1011 20:58:55.968479  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1011 20:58:55.968506  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1011 20:58:55.968592  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.968884  283686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 20:58:55.968915  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 20:58:55.968972  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:56.005609  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.006537  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.008685  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.012733  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1011 20:58:56.018921  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1011 20:58:56.018946  283686 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1011 20:58:56.019026  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:56.053320  283686 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 20:58:56.055560  283686 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 20:58:56.055716  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:56.070370  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.076337  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.114977  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.128630  283686 out.go:177]   - Using image docker.io/busybox:stable
	I1011 20:58:56.131646  283686 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1011 20:58:56.134643  283686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1011 20:58:56.134669  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1011 20:58:56.134917  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:56.138500  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.145032  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.145895  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.155678  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.181522  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.189512  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.196431  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.368404  283686 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1011 20:58:56.368428  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1011 20:58:56.472320  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1011 20:58:56.518938  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1011 20:58:56.531491  283686 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1011 20:58:56.531516  283686 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1011 20:58:56.563021  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1011 20:58:56.605629  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 20:58:56.608502  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1011 20:58:56.608529  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1011 20:58:56.625478  283686 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1011 20:58:56.625504  283686 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1011 20:58:56.649181  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1011 20:58:56.673072  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1011 20:58:56.725528  283686 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1011 20:58:56.725555  283686 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1011 20:58:56.728468  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1011 20:58:56.739779  283686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 20:58:56.768242  283686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1011 20:58:56.768270  283686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1011 20:58:56.783014  283686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 20:58:56.783047  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1011 20:58:56.831898  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1011 20:58:56.833511  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 20:58:56.841309  283686 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1011 20:58:56.841335  283686 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1011 20:58:56.846517  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1011 20:58:56.846545  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1011 20:58:56.856959  283686 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1011 20:58:56.856980  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1011 20:58:56.997143  283686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1011 20:58:56.997175  283686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1011 20:58:57.024146  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1011 20:58:57.024175  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1011 20:58:57.028404  283686 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1011 20:58:57.028438  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1011 20:58:57.029189  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1011 20:58:57.045531  283686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 20:58:57.045557  283686 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 20:58:57.167999  283686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1011 20:58:57.168037  283686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1011 20:58:57.193463  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1011 20:58:57.259558  283686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 20:58:57.259624  283686 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 20:58:57.270481  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1011 20:58:57.270555  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1011 20:58:57.382683  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1011 20:58:57.382760  283686 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1011 20:58:57.489884  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 20:58:57.493222  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1011 20:58:57.493249  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1011 20:58:57.593348  283686 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:58:57.593374  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1011 20:58:57.629812  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1011 20:58:57.629887  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1011 20:58:57.721382  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1011 20:58:57.721458  283686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1011 20:58:57.730696  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:58:57.814442  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1011 20:58:57.814517  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1011 20:58:57.916496  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1011 20:58:57.916578  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1011 20:58:58.041706  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1011 20:58:58.041783  283686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1011 20:58:58.078742  283686 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.284786574s)
	I1011 20:58:58.078822  283686 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
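The host-record injection completed above pipes the coredns ConfigMap through the sed expression visible in the log. A minimal sketch of that same transformation on a sample Corefile fragment (the sed edits are copied from the log; the sample Corefile content and `/tmp` paths here are illustrative, and GNU sed is assumed):

```shell
# Sample Corefile fragment as it would appear in the coredns ConfigMap.
# The 8-space indentation matters: the sed addresses match it exactly.
cat > /tmp/corefile-sample <<'EOF'
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
EOF

# The same two edits minikube applies: insert a hosts{} block resolving
# host.minikube.internal before the forward plugin, and enable query
# logging by inserting "log" before "errors".
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' \
    /tmp/corefile-sample > /tmp/corefile-patched
cat /tmp/corefile-patched
```

In the real run this output is fed back through `kubectl replace -f -`, which is why the log reports the host record as "injected into CoreDNS's ConfigMap".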
	I1011 20:58:58.245585  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1011 20:58:59.490837  283686 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-627736" context rescaled to 1 replicas
	I1011 20:59:01.917891  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.445530604s)
	I1011 20:59:01.917957  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.398994318s)
	I1011 20:59:01.917984  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.354939823s)
	I1011 20:59:01.918031  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.312380424s)
	I1011 20:59:02.669194  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.019973991s)
	I1011 20:59:02.669230  283686 addons.go:475] Verifying addon ingress=true in "addons-627736"
	I1011 20:59:02.669426  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.996326459s)
	I1011 20:59:02.669490  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.941000968s)
	I1011 20:59:02.669666  283686 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.929864181s)
	I1011 20:59:02.670563  283686 node_ready.go:35] waiting up to 6m0s for node "addons-627736" to be "Ready" ...
	I1011 20:59:02.670755  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.838831772s)
	I1011 20:59:02.670794  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.83725768s)
	I1011 20:59:02.670931  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.641710107s)
	I1011 20:59:02.671218  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.477716719s)
	I1011 20:59:02.671240  283686 addons.go:475] Verifying addon registry=true in "addons-627736"
	I1011 20:59:02.671349  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.181398005s)
	I1011 20:59:02.671836  283686 addons.go:475] Verifying addon metrics-server=true in "addons-627736"
	I1011 20:59:02.671431  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.940660017s)
	W1011 20:59:02.671877  283686 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1011 20:59:02.671908  283686 retry.go:31] will retry after 280.043704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
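The failure above is the usual CRD race: the VolumeSnapshotClass object is applied in the same `kubectl apply` invocation as the CRDs that define it, so the CRDs are not yet established when the snapshot class is mapped, the apply exits 1, and `retry.go` re-runs the command after a short backoff (the later run at 20:59:02.952 also adds `--force`). A minimal sketch of that retry-with-backoff pattern, using a hypothetical flaky command in place of kubectl (`retry_until_ok` and `flaky_apply` are illustrative helpers, not minikube code):

```shell
#!/bin/sh
# Retry a command with a fixed backoff, mirroring the behavior the
# retry.go log line describes ("will retry after 280.043704ms: ...").
retry_until_ok() {
  max_attempts=$1; backoff=$2; shift 2
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed; will retry after ${backoff}s"
    sleep "$backoff"
    attempt=$((attempt + 1))
  done
  echo "succeeded on attempt $attempt"
}

# Simulated flaky apply: fails until it has been called 3 times,
# standing in for "CRDs not yet established".
flaky_apply() {
  echo x >> /tmp/flaky-marker
  [ "$(wc -l < /tmp/flaky-marker)" -ge 3 ]
}

rm -f /tmp/flaky-marker
retry_until_ok 5 0 flaky_apply
```

An alternative to retrying is to avoid the race entirely by applying the CRD manifests first and waiting for `kubectl wait --for condition=established crd/...` before applying the objects that use them.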
	I1011 20:59:02.673387  283686 out.go:177] * Verifying ingress addon...
	I1011 20:59:02.675402  283686 out.go:177] * Verifying registry addon...
	I1011 20:59:02.675440  283686 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-627736 service yakd-dashboard -n yakd-dashboard
	
	I1011 20:59:02.678009  283686 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1011 20:59:02.680837  283686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1011 20:59:02.696586  283686 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1011 20:59:02.696622  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:02.697137  283686 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1011 20:59:02.697156  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1011 20:59:02.712906  283686 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1011 20:59:02.952373  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:02.982563  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.736879831s)
	I1011 20:59:02.982641  283686 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-627736"
	I1011 20:59:02.985523  283686 out.go:177] * Verifying csi-hostpath-driver addon...
	I1011 20:59:02.988978  283686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1011 20:59:03.035520  283686 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1011 20:59:03.035601  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:03.183194  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:03.186324  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:03.493646  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:03.682822  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:03.684905  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:03.738021  283686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1011 20:59:03.738117  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:59:03.757442  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:59:03.861745  283686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1011 20:59:03.880123  283686 addons.go:234] Setting addon gcp-auth=true in "addons-627736"
	I1011 20:59:03.880174  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:59:03.880645  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:59:03.896454  283686 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1011 20:59:03.896511  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:59:03.912993  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:59:03.992803  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:04.181892  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:04.184489  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:04.492656  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:04.673923  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:04.682577  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:04.685290  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:04.992639  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:05.181990  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:05.183816  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:05.493373  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:05.635806  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.683361623s)
	I1011 20:59:05.635908  283686 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.739429592s)
	I1011 20:59:05.639136  283686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:59:05.641789  283686 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1011 20:59:05.644603  283686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1011 20:59:05.644626  283686 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1011 20:59:05.669566  283686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1011 20:59:05.669640  283686 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1011 20:59:05.686944  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:05.688228  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:05.689247  283686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1011 20:59:05.689294  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1011 20:59:05.708250  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1011 20:59:05.993489  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:06.198255  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:06.199287  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:06.231718  283686 addons.go:475] Verifying addon gcp-auth=true in "addons-627736"
	I1011 20:59:06.234518  283686 out.go:177] * Verifying gcp-auth addon...
	I1011 20:59:06.238068  283686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1011 20:59:06.292136  283686 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1011 20:59:06.292163  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:06.493163  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:06.674337  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:06.683778  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:06.684688  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:06.741535  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:06.993133  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:07.182285  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:07.183661  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:07.241859  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:07.493278  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:07.681967  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:07.683503  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:07.741108  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:07.992617  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:08.182085  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:08.183519  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:08.241707  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:08.493072  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:08.682542  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:08.684149  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:08.741313  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:08.992849  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:09.174731  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:09.181814  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:09.184266  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:09.241992  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:09.493320  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:09.681883  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:09.684325  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:09.742874  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:09.992743  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:10.182621  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:10.185275  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:10.241946  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:10.493250  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:10.681964  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:10.683427  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:10.741160  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:10.992612  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:11.182647  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:11.184012  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:11.241156  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:11.493024  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:11.673671  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:11.681972  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:11.683207  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:11.741597  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:11.993014  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:12.183057  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:12.184153  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:12.241733  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:12.493692  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:12.681886  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:12.684534  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:12.741454  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:12.992955  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:13.181748  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:13.184419  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:13.241592  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:13.493095  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:13.674573  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:13.682496  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:13.684939  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:13.742119  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:13.992875  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:14.181857  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:14.184329  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:14.241446  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:14.493098  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:14.682063  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:14.684638  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:14.741943  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:14.993295  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:15.181549  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:15.184251  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:15.241662  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:15.493311  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:15.682723  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:15.684343  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:15.741516  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:15.993459  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:16.174022  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:16.182445  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:16.183645  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:16.241552  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:16.492751  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:16.681750  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:16.684118  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:16.741639  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:16.992746  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:17.183629  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:17.184277  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:17.241973  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:17.492734  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:17.682865  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:17.684402  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:17.741097  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:17.995021  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:18.182167  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:18.183861  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:18.241904  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:18.493035  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:18.674273  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:18.681930  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:18.683425  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:18.741426  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:18.992961  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:19.181947  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:19.184457  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:19.241288  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:19.492705  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:19.682304  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:19.683740  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:19.741937  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:19.993325  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:20.182508  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:20.184363  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:20.241464  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:20.493136  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:20.682419  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:20.684964  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:20.741720  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:20.993256  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:21.174475  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:21.183705  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:21.185111  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:21.242124  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:21.493585  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:21.683271  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:21.684764  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:21.741651  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:21.992919  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:22.181755  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:22.183451  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:22.241282  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:22.492890  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:22.682246  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:22.684834  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:22.746715  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:22.992549  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:23.176323  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:23.182514  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:23.184304  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:23.242069  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:23.493352  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:23.682364  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:23.684730  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:23.741463  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:23.992965  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:24.181820  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:24.183750  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:24.241253  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:24.492731  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:24.682281  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:24.683693  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:24.741944  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:24.993076  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:25.182587  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:25.183739  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:25.241982  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:25.493625  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:25.674600  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:25.681772  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:25.683562  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:25.741810  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:25.993387  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:26.182590  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:26.184710  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:26.241857  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:26.492826  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:26.682276  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:26.683688  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:26.741437  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:26.993180  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:27.182312  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:27.183903  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:27.241894  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:27.493024  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:27.683256  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:27.686348  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:27.741966  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:27.993513  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:28.173949  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:28.182245  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:28.183965  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:28.242001  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:28.492227  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:28.682531  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:28.684476  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:28.741079  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:28.992564  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:29.182119  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:29.184811  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:29.241714  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:29.493700  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:29.681614  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:29.684137  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:29.742016  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:29.993085  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:30.175180  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:30.182373  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:30.184220  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:30.242043  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:30.492552  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:30.682016  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:30.683560  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:30.741923  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:30.992979  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:31.181750  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:31.184397  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:31.241582  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:31.492674  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:31.682147  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:31.684493  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:31.741675  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:31.993074  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:32.182356  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:32.183682  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:32.241422  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:32.492651  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:32.674565  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:32.681835  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:32.684365  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:32.741649  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:32.993079  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:33.181884  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:33.191418  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:33.241540  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:33.492434  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:33.682370  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:33.683679  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:33.741485  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:33.993563  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:34.183261  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:34.184460  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:34.241368  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:34.493179  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:34.681640  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:34.684029  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:34.741217  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:34.992492  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:35.174402  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:35.182410  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:35.184098  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:35.241247  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:35.492586  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:35.682516  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:35.684853  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:35.741420  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:35.993331  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:36.182161  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:36.183974  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:36.241195  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:36.492867  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:36.681849  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:36.684459  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:36.741676  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:36.992977  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:37.174448  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:37.181637  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:37.184374  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:37.241170  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:37.493287  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:37.682599  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:37.683939  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:37.742151  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:37.992345  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:38.182246  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:38.185055  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:38.241412  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:38.493510  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:38.682169  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:38.684727  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:38.741870  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:38.993291  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:39.174764  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:39.182453  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:39.184999  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:39.241070  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:39.492924  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:39.682342  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:39.683714  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:39.741552  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:39.993185  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:40.184140  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:40.184234  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:40.241666  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:40.492888  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:40.682116  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:40.683573  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:40.742044  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:40.993289  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:41.182451  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.184172  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:41.241855  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:41.493060  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:41.673640  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:41.681716  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.684360  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:41.740958  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:41.992711  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:42.179314  283686 node_ready.go:49] node "addons-627736" has status "Ready":"True"
	I1011 20:59:42.179399  283686 node_ready.go:38] duration metric: took 39.508803304s for node "addons-627736" to be "Ready" ...
	I1011 20:59:42.179426  283686 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 20:59:42.193131  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:42.198310  283686 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsfcm" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:42.201659  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:42.249976  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:42.596312  283686 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1011 20:59:42.596347  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:42.719651  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:42.744392  283686 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1011 20:59:42.744480  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:42.790435  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:43.019557  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:43.186990  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:43.189022  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.287030  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:43.499334  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:43.686725  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.690062  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:43.709257  283686 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsfcm" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.709285  283686 pod_ready.go:82] duration metric: took 1.510893235s for pod "coredns-7c65d6cfc9-rsfcm" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.709304  283686 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.720973  283686 pod_ready.go:93] pod "etcd-addons-627736" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.720999  283686 pod_ready.go:82] duration metric: took 11.687165ms for pod "etcd-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.721014  283686 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.726742  283686 pod_ready.go:93] pod "kube-apiserver-addons-627736" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.726766  283686 pod_ready.go:82] duration metric: took 5.744181ms for pod "kube-apiserver-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.726777  283686 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.748268  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:43.749213  283686 pod_ready.go:93] pod "kube-controller-manager-addons-627736" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.749236  283686 pod_ready.go:82] duration metric: took 22.451255ms for pod "kube-controller-manager-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.749251  283686 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p49c6" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.760664  283686 pod_ready.go:93] pod "kube-proxy-p49c6" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.760690  283686 pod_ready.go:82] duration metric: took 11.430688ms for pod "kube-proxy-p49c6" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.760703  283686 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.993833  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:44.104050  283686 pod_ready.go:93] pod "kube-scheduler-addons-627736" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:44.104118  283686 pod_ready.go:82] duration metric: took 343.406965ms for pod "kube-scheduler-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:44.104147  283686 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:44.183102  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:44.186555  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:44.241576  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:44.493805  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:44.683938  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:44.685898  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:44.741968  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:44.993814  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:45.183884  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:45.189316  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:45.242565  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:45.493753  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:45.682701  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:45.686227  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:45.741523  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:45.994322  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:46.110044  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:46.182616  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:46.185629  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:46.242120  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:46.497907  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:46.683818  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:46.687395  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:46.744688  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:46.994553  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:47.183214  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.185159  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:47.242239  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:47.495631  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:47.685777  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:47.687418  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.742781  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:47.994449  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:48.112132  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:48.192670  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:48.193948  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:48.242987  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:48.494555  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:48.683889  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:48.686312  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:48.742375  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:48.996072  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:49.183133  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:49.186476  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:49.241990  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:49.494583  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:49.682811  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:49.685060  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:49.742058  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:49.994247  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:50.182747  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:50.185182  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:50.241730  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:50.493658  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:50.610921  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:50.683159  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:50.685700  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:50.742312  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:50.994160  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:51.190942  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.199356  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:51.242029  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:51.494696  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:51.683743  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.685318  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:51.741837  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:51.993888  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:52.183778  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.185430  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:52.241527  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:52.494951  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:52.611283  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:52.682430  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.684406  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:52.741700  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:52.994135  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:53.183200  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:53.186432  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:53.241732  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:53.493625  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:53.684232  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:53.685973  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:53.742356  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:53.994723  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:54.183115  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:54.184937  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:54.242372  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:54.494488  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:54.683980  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:54.686343  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:54.742097  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:54.994181  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:55.111646  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:55.183128  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.186859  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:55.242160  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:55.494393  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:55.682495  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.683985  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:55.742424  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:55.994026  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:56.182440  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.185625  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:56.242459  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:56.495930  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:56.694338  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:56.697540  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.742185  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:56.995236  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:57.117222  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:57.186268  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:57.191246  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:57.242488  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:57.497470  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:57.686445  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:57.689875  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:57.742776  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:57.994343  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:58.189584  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:58.191223  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:58.245179  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:58.496072  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:58.682735  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:58.685405  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:58.742370  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:58.995920  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:59.183399  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.185421  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:59.241922  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:59.494451  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:59.610657  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:59.683093  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.685441  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:59.741726  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:59.993959  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:00.193795  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:00.200588  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:00.253340  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:00.499644  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:00.687591  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:00.689931  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:00.745097  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:00.995432  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:01.186258  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:01.186667  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:01.242372  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:01.496051  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:01.612733  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:01.690768  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:01.692834  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:01.742289  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:01.995673  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:02.186089  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:02.188213  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:02.242896  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:02.495915  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:02.684190  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:02.692008  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:02.741590  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:02.996039  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:03.196082  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:03.197968  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:03.241881  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:03.507365  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:03.615334  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:03.683208  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:03.685729  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:03.742108  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:03.993770  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:04.184732  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:04.185089  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.242393  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:04.493713  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:04.685917  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.688861  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:04.742998  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:04.995200  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:05.184958  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:05.189363  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:05.242522  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:05.494624  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:05.683544  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:05.686218  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:05.741836  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:05.994541  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:06.114812  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:06.183082  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:06.184605  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:06.241925  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:06.493597  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:06.692572  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:06.693443  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:06.785954  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:06.993595  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:07.183286  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:07.185093  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:07.241714  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:07.494154  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:07.682752  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:07.684972  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:07.742153  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:07.993857  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:08.182530  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.184733  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:08.241927  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:08.493917  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:08.624802  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:08.694364  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.695104  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:08.742802  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:08.995515  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:09.196012  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:09.201441  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:09.243086  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:09.495892  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:09.718784  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:09.720547  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:09.799122  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:09.995473  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:10.184807  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:10.187801  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:10.242580  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:10.494931  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:10.684624  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:10.687249  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:10.747744  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:10.995669  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:11.114546  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:11.182751  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:11.186321  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:11.245362  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:11.494467  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:11.683316  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:11.688864  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:11.742621  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:11.998253  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:12.182795  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:12.186205  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:12.241894  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:12.494235  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:12.683620  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:12.685612  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:12.742027  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:12.994984  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:13.183372  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:13.185346  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:13.241619  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:13.495083  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:13.614045  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:13.683339  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:13.687369  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:13.744324  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:13.997347  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:14.185094  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:14.185496  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:14.241847  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:14.497025  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:14.683090  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:14.686614  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:14.742416  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:14.994358  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:15.186258  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:15.187922  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.242542  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:15.497130  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:15.685136  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.688008  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:15.742886  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:15.994861  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:16.113630  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:16.186624  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:16.187492  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:16.242192  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:16.495067  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:16.685043  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:16.687654  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:16.742454  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:16.994440  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:17.185828  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:17.188238  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:17.241798  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:17.495215  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:17.682774  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:17.684698  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:17.741767  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:17.994817  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:18.183625  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:18.185371  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:18.241982  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:18.494791  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:18.610621  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:18.683173  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:18.685074  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:18.741300  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:18.994020  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:19.183182  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:19.185005  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:19.241399  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:19.494572  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:19.685907  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:19.689066  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:19.741805  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:19.993862  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:20.184033  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:20.186554  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:20.242159  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:20.495473  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:20.610667  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:20.683864  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:20.685604  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:20.742184  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:20.994375  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:21.182916  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:21.184822  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:21.242287  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:21.493767  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:21.682572  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:21.684572  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:21.743892  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:21.993704  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:22.184216  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:22.185387  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:22.241994  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:22.494082  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:22.613037  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:22.684065  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:22.686350  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:22.783621  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:23.000875  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:23.182730  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:23.184996  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:23.241191  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:23.495196  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:23.682923  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:23.685133  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:23.743663  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:23.993614  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:24.183979  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:24.185519  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:24.243000  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:24.493990  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:24.683195  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:24.686268  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:24.742093  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:24.995048  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:25.110628  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:25.184752  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:25.185739  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:25.242025  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:25.495015  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:25.682812  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:25.685241  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:25.742124  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:25.994902  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:26.182333  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:26.184623  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:26.241914  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:26.493365  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:26.682951  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:26.684589  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:26.741708  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:26.995401  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:27.111069  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:27.183478  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:27.186340  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:27.241434  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:27.495892  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:27.684907  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:27.686037  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:27.745184  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:27.997566  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:28.185650  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:28.191867  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:28.242293  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:28.494273  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:28.683733  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:28.685365  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:28.741970  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:28.995236  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:29.112344  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:29.183377  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:29.185146  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:29.242008  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:29.494207  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:29.682967  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:29.686063  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:29.741539  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:29.994046  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:30.184213  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:30.186672  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:30.242441  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:30.494468  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:30.683868  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:30.688984  283686 kapi.go:107] duration metric: took 1m28.008145424s to wait for kubernetes.io/minikube-addons=registry ...
	I1011 21:00:30.741386  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:30.995591  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:31.183001  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:31.242329  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:31.496152  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:31.613711  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:31.683873  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:31.743124  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:31.996789  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:32.184118  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:32.242547  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:32.494440  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:32.683833  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:32.742546  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:32.997139  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:33.184720  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:33.242611  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:33.495118  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:33.682615  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:33.742743  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:33.994658  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:34.111273  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:34.183799  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:34.247855  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:34.494352  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:34.684256  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:34.741945  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:34.994905  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:35.183388  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:35.241326  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:35.495231  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:35.683562  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:35.742086  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:35.994695  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:36.115471  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:36.184554  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:36.241872  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:36.493976  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:36.682557  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:36.742188  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:36.993881  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:37.184475  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:37.241829  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:37.494635  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:37.683549  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:37.742141  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:37.995062  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:38.183151  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:38.241804  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:38.494086  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:38.612073  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:38.684580  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:38.745810  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:38.994664  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:39.183742  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:39.282752  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:39.494687  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:39.683585  283686 kapi.go:107] duration metric: took 1m37.005572759s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1011 21:00:39.742538  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:39.995708  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:40.242648  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:40.495807  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:40.612427  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:40.750228  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:40.994549  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:41.298108  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:41.494448  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:41.741864  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:41.994483  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:42.241532  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:42.494675  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:42.742721  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:42.995753  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:43.111490  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:43.248237  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:43.495930  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:43.741833  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:43.994355  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:44.241138  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:44.494599  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:44.741504  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:44.995053  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:45.113671  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:45.242744  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:45.493636  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:45.741925  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:45.993727  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:46.241393  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:46.495043  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:46.742401  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:46.995324  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:47.242042  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:47.494424  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:47.611133  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:47.741713  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:47.996051  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:48.241734  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:48.493977  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:48.742221  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:48.994375  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:49.241791  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:49.494375  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:49.742366  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:49.996759  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:50.110838  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:50.246769  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:50.494639  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:50.741877  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:50.994681  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:51.244654  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:51.494290  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:51.752438  283686 kapi.go:107] duration metric: took 1m45.514367168s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1011 21:00:51.755643  283686 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-627736 cluster.
	I1011 21:00:51.759261  283686 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1011 21:00:51.760775  283686 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1011 21:00:51.994653  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:52.501765  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:52.622009  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:52.994761  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:53.494925  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:53.994877  283686 kapi.go:107] duration metric: took 1m51.005901127s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1011 21:00:53.996214  283686 out.go:177] * Enabled addons: inspektor-gadget, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1011 21:00:53.997472  283686 addons.go:510] duration metric: took 1m58.471592636s for enable addons: enabled=[inspektor-gadget amd-gpu-device-plugin nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1011 21:00:55.110733  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:57.610448  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:59.610618  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:01:01.611417  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:01:03.615797  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:01:05.610009  283686 pod_ready.go:93] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"True"
	I1011 21:01:05.610036  283686 pod_ready.go:82] duration metric: took 1m21.505867715s for pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace to be "Ready" ...
	I1011 21:01:05.610053  283686 pod_ready.go:39] duration metric: took 1m23.430586026s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:01:05.610069  283686 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:01:05.610104  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 21:01:05.610169  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 21:01:05.670588  283686 cri.go:89] found id: "98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:05.670627  283686 cri.go:89] found id: ""
	I1011 21:01:05.670636  283686 logs.go:282] 1 containers: [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78]
	I1011 21:01:05.670702  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.674259  283686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 21:01:05.674337  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 21:01:05.716284  283686 cri.go:89] found id: "b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:05.716307  283686 cri.go:89] found id: ""
	I1011 21:01:05.716315  283686 logs.go:282] 1 containers: [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b]
	I1011 21:01:05.716372  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.719810  283686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 21:01:05.719937  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 21:01:05.773858  283686 cri.go:89] found id: "4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:05.773882  283686 cri.go:89] found id: ""
	I1011 21:01:05.773891  283686 logs.go:282] 1 containers: [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab]
	I1011 21:01:05.773961  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.777260  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 21:01:05.777398  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 21:01:05.816412  283686 cri.go:89] found id: "52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:05.816491  283686 cri.go:89] found id: ""
	I1011 21:01:05.816515  283686 logs.go:282] 1 containers: [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015]
	I1011 21:01:05.816610  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.820235  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 21:01:05.820306  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 21:01:05.860720  283686 cri.go:89] found id: "b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:05.860743  283686 cri.go:89] found id: ""
	I1011 21:01:05.860752  283686 logs.go:282] 1 containers: [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e]
	I1011 21:01:05.860809  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.864766  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 21:01:05.864873  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 21:01:05.905768  283686 cri.go:89] found id: "44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:05.905801  283686 cri.go:89] found id: ""
	I1011 21:01:05.905811  283686 logs.go:282] 1 containers: [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432]
	I1011 21:01:05.905877  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.909382  283686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 21:01:05.909464  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 21:01:05.952306  283686 cri.go:89] found id: "aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:05.952329  283686 cri.go:89] found id: ""
	I1011 21:01:05.952337  283686 logs.go:282] 1 containers: [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1]
	I1011 21:01:05.952433  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.955834  283686 logs.go:123] Gathering logs for coredns [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab] ...
	I1011 21:01:05.955862  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:06.001543  283686 logs.go:123] Gathering logs for kube-scheduler [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015] ...
	I1011 21:01:06.001575  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:06.054417  283686 logs.go:123] Gathering logs for kube-controller-manager [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432] ...
	I1011 21:01:06.054448  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:06.192875  283686 logs.go:123] Gathering logs for container status ...
	I1011 21:01:06.192916  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 21:01:06.261533  283686 logs.go:123] Gathering logs for kubelet ...
	I1011 21:01:06.261575  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 21:01:06.313314  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:06.313591  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:06.313771  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:06.313990  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:06.375037  283686 logs.go:123] Gathering logs for dmesg ...
	I1011 21:01:06.375070  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 21:01:06.393698  283686 logs.go:123] Gathering logs for describe nodes ...
	I1011 21:01:06.393732  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 21:01:06.574353  283686 logs.go:123] Gathering logs for etcd [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b] ...
	I1011 21:01:06.574384  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:06.635302  283686 logs.go:123] Gathering logs for kube-apiserver [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78] ...
	I1011 21:01:06.635332  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:06.689059  283686 logs.go:123] Gathering logs for kube-proxy [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e] ...
	I1011 21:01:06.689093  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:06.731413  283686 logs.go:123] Gathering logs for kindnet [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1] ...
	I1011 21:01:06.731442  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:06.772473  283686 logs.go:123] Gathering logs for CRI-O ...
	I1011 21:01:06.772503  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 21:01:06.864616  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:06.864650  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 21:01:06.864735  283686 out.go:270] X Problems detected in kubelet:
	W1011 21:01:06.864747  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:06.864761  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:06.864781  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:06.864795  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:06.864813  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:06.864822  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:01:16.865634  283686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:01:16.879583  283686 api_server.go:72] duration metric: took 2m21.353946238s to wait for apiserver process to appear ...
	I1011 21:01:16.879609  283686 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:01:16.879645  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 21:01:16.879702  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 21:01:16.919894  283686 cri.go:89] found id: "98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:16.919915  283686 cri.go:89] found id: ""
	I1011 21:01:16.919925  283686 logs.go:282] 1 containers: [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78]
	I1011 21:01:16.919985  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:16.923744  283686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 21:01:16.923827  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 21:01:16.963028  283686 cri.go:89] found id: "b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:16.963055  283686 cri.go:89] found id: ""
	I1011 21:01:16.963065  283686 logs.go:282] 1 containers: [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b]
	I1011 21:01:16.963123  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:16.966723  283686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 21:01:16.966796  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 21:01:17.015407  283686 cri.go:89] found id: "4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:17.015432  283686 cri.go:89] found id: ""
	I1011 21:01:17.015451  283686 logs.go:282] 1 containers: [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab]
	I1011 21:01:17.015513  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:17.018891  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 21:01:17.018963  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 21:01:17.060477  283686 cri.go:89] found id: "52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:17.060497  283686 cri.go:89] found id: ""
	I1011 21:01:17.060506  283686 logs.go:282] 1 containers: [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015]
	I1011 21:01:17.060562  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:17.064163  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 21:01:17.064243  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 21:01:17.103509  283686 cri.go:89] found id: "b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:17.103533  283686 cri.go:89] found id: ""
	I1011 21:01:17.103543  283686 logs.go:282] 1 containers: [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e]
	I1011 21:01:17.103606  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:17.107091  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 21:01:17.107159  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 21:01:17.146822  283686 cri.go:89] found id: "44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:17.146884  283686 cri.go:89] found id: ""
	I1011 21:01:17.146893  283686 logs.go:282] 1 containers: [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432]
	I1011 21:01:17.146958  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:17.150695  283686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 21:01:17.150775  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 21:01:17.196771  283686 cri.go:89] found id: "aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:17.196795  283686 cri.go:89] found id: ""
	I1011 21:01:17.196804  283686 logs.go:282] 1 containers: [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1]
	I1011 21:01:17.196857  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:17.200447  283686 logs.go:123] Gathering logs for describe nodes ...
	I1011 21:01:17.200484  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 21:01:17.335296  283686 logs.go:123] Gathering logs for kube-apiserver [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78] ...
	I1011 21:01:17.335328  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:17.404435  283686 logs.go:123] Gathering logs for coredns [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab] ...
	I1011 21:01:17.404470  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:17.445029  283686 logs.go:123] Gathering logs for kube-scheduler [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015] ...
	I1011 21:01:17.445060  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:17.493920  283686 logs.go:123] Gathering logs for kube-proxy [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e] ...
	I1011 21:01:17.493951  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:17.535275  283686 logs.go:123] Gathering logs for kube-controller-manager [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432] ...
	I1011 21:01:17.535304  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:17.633273  283686 logs.go:123] Gathering logs for CRI-O ...
	I1011 21:01:17.633309  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 21:01:17.734290  283686 logs.go:123] Gathering logs for kubelet ...
	I1011 21:01:17.734331  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 21:01:17.791643  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:17.791917  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:17.792100  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:17.792319  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:17.854309  283686 logs.go:123] Gathering logs for etcd [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b] ...
	I1011 21:01:17.854352  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:17.911517  283686 logs.go:123] Gathering logs for kindnet [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1] ...
	I1011 21:01:17.911548  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:17.955470  283686 logs.go:123] Gathering logs for container status ...
	I1011 21:01:17.955501  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 21:01:18.019275  283686 logs.go:123] Gathering logs for dmesg ...
	I1011 21:01:18.019359  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 21:01:18.036766  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:18.036801  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 21:01:18.036901  283686 out.go:270] X Problems detected in kubelet:
	W1011 21:01:18.036919  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:18.036956  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:18.036983  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:18.036993  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:18.037003  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:18.037024  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:01:28.038301  283686 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1011 21:01:28.046207  283686 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1011 21:01:28.047432  283686 api_server.go:141] control plane version: v1.31.1
	I1011 21:01:28.047459  283686 api_server.go:131] duration metric: took 11.167842186s to wait for apiserver health ...
	I1011 21:01:28.047468  283686 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 21:01:28.047490  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 21:01:28.047555  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 21:01:28.101765  283686 cri.go:89] found id: "98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:28.101789  283686 cri.go:89] found id: ""
	I1011 21:01:28.101798  283686 logs.go:282] 1 containers: [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78]
	I1011 21:01:28.101857  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.105239  283686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 21:01:28.105317  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 21:01:28.149350  283686 cri.go:89] found id: "b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:28.149373  283686 cri.go:89] found id: ""
	I1011 21:01:28.149382  283686 logs.go:282] 1 containers: [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b]
	I1011 21:01:28.149441  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.153152  283686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 21:01:28.153227  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 21:01:28.192693  283686 cri.go:89] found id: "4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:28.192717  283686 cri.go:89] found id: ""
	I1011 21:01:28.192725  283686 logs.go:282] 1 containers: [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab]
	I1011 21:01:28.192785  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.196264  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 21:01:28.196332  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 21:01:28.234414  283686 cri.go:89] found id: "52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:28.234434  283686 cri.go:89] found id: ""
	I1011 21:01:28.234443  283686 logs.go:282] 1 containers: [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015]
	I1011 21:01:28.234497  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.237874  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 21:01:28.237995  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 21:01:28.277109  283686 cri.go:89] found id: "b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:28.277134  283686 cri.go:89] found id: ""
	I1011 21:01:28.277143  283686 logs.go:282] 1 containers: [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e]
	I1011 21:01:28.277199  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.280762  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 21:01:28.280850  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 21:01:28.321646  283686 cri.go:89] found id: "44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:28.321671  283686 cri.go:89] found id: ""
	I1011 21:01:28.321680  283686 logs.go:282] 1 containers: [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432]
	I1011 21:01:28.321742  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.325266  283686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 21:01:28.325364  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 21:01:28.364739  283686 cri.go:89] found id: "aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:28.364763  283686 cri.go:89] found id: ""
	I1011 21:01:28.364772  283686 logs.go:282] 1 containers: [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1]
	I1011 21:01:28.364831  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.368342  283686 logs.go:123] Gathering logs for kindnet [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1] ...
	I1011 21:01:28.368368  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:28.410471  283686 logs.go:123] Gathering logs for kubelet ...
	I1011 21:01:28.410498  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 21:01:28.462321  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:28.462562  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:28.462740  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:28.462965  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:28.525427  283686 logs.go:123] Gathering logs for dmesg ...
	I1011 21:01:28.525457  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 21:01:28.543659  283686 logs.go:123] Gathering logs for describe nodes ...
	I1011 21:01:28.543693  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 21:01:28.700375  283686 logs.go:123] Gathering logs for etcd [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b] ...
	I1011 21:01:28.700591  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:28.789188  283686 logs.go:123] Gathering logs for kube-controller-manager [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432] ...
	I1011 21:01:28.789222  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:28.861006  283686 logs.go:123] Gathering logs for container status ...
	I1011 21:01:28.861044  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 21:01:28.917375  283686 logs.go:123] Gathering logs for kube-apiserver [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78] ...
	I1011 21:01:28.917411  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:28.972973  283686 logs.go:123] Gathering logs for coredns [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab] ...
	I1011 21:01:28.973008  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:29.020737  283686 logs.go:123] Gathering logs for kube-scheduler [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015] ...
	I1011 21:01:29.020769  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:29.067399  283686 logs.go:123] Gathering logs for kube-proxy [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e] ...
	I1011 21:01:29.067433  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:29.109861  283686 logs.go:123] Gathering logs for CRI-O ...
	I1011 21:01:29.109889  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 21:01:29.200877  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:29.200911  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 21:01:29.200992  283686 out.go:270] X Problems detected in kubelet:
	W1011 21:01:29.201006  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:29.201022  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:29.201044  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:29.201057  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:29.201065  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:29.201084  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:01:39.213529  283686 system_pods.go:59] 18 kube-system pods found
	I1011 21:01:39.213570  283686 system_pods.go:61] "coredns-7c65d6cfc9-rsfcm" [996e1047-8f10-483c-b830-62ec9c4b730f] Running
	I1011 21:01:39.213577  283686 system_pods.go:61] "csi-hostpath-attacher-0" [87ab25f5-e4b7-4fca-9c15-c48d19c12b6b] Running
	I1011 21:01:39.213582  283686 system_pods.go:61] "csi-hostpath-resizer-0" [a69f5a66-0e63-4da4-a58d-9b45ff6cea64] Running
	I1011 21:01:39.213612  283686 system_pods.go:61] "csi-hostpathplugin-62fx7" [86c47021-238a-4871-ac72-f78324ed2dd6] Running
	I1011 21:01:39.213625  283686 system_pods.go:61] "etcd-addons-627736" [827639aa-3bdc-40ac-aa45-a6fea950ca93] Running
	I1011 21:01:39.213631  283686 system_pods.go:61] "kindnet-dl4r6" [062ac268-a384-40a2-a21f-958b9a3a66b1] Running
	I1011 21:01:39.213635  283686 system_pods.go:61] "kube-apiserver-addons-627736" [995afcf6-521b-49ba-a610-46c76edc3841] Running
	I1011 21:01:39.213644  283686 system_pods.go:61] "kube-controller-manager-addons-627736" [8a9b26d8-92ce-4ff1-930e-b4b9d34f5b9c] Running
	I1011 21:01:39.213648  283686 system_pods.go:61] "kube-ingress-dns-minikube" [9ee3781e-ba5e-4b03-a5f5-cc32cc20407b] Running
	I1011 21:01:39.213651  283686 system_pods.go:61] "kube-proxy-p49c6" [995ebad4-48a5-48d5-a2aa-aef4671e5f5f] Running
	I1011 21:01:39.213655  283686 system_pods.go:61] "kube-scheduler-addons-627736" [2878cfca-4eed-4105-8c19-850954387751] Running
	I1011 21:01:39.213659  283686 system_pods.go:61] "metrics-server-84c5f94fbc-96mlh" [6cae95da-c64a-42fb-a86c-a65aa4fa0447] Running
	I1011 21:01:39.213666  283686 system_pods.go:61] "nvidia-device-plugin-daemonset-p9nsd" [41af943b-e0c9-4974-aa28-297cadfc3d28] Running
	I1011 21:01:39.213670  283686 system_pods.go:61] "registry-66c9cd494c-p6l9v" [0674412c-ee63-4347-b013-fcbb85bd1f6a] Running
	I1011 21:01:39.213695  283686 system_pods.go:61] "registry-proxy-hxsb7" [9f05d6fb-3f2f-4840-a6f5-392af1bf7e10] Running
	I1011 21:01:39.213705  283686 system_pods.go:61] "snapshot-controller-56fcc65765-5ldbm" [2c8ed9f6-cfa8-44fc-aa89-06743412532e] Running
	I1011 21:01:39.213709  283686 system_pods.go:61] "snapshot-controller-56fcc65765-df6h6" [14e51f4a-4153-44fa-a8e0-9db0a24b48d7] Running
	I1011 21:01:39.213713  283686 system_pods.go:61] "storage-provisioner" [f1e91d7e-5124-4e47-9e2f-6ef18efad060] Running
	I1011 21:01:39.213725  283686 system_pods.go:74] duration metric: took 11.166248362s to wait for pod list to return data ...
	I1011 21:01:39.213737  283686 default_sa.go:34] waiting for default service account to be created ...
	I1011 21:01:39.216674  283686 default_sa.go:45] found service account: "default"
	I1011 21:01:39.216702  283686 default_sa.go:55] duration metric: took 2.958981ms for default service account to be created ...
	I1011 21:01:39.216713  283686 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 21:01:39.228323  283686 system_pods.go:86] 18 kube-system pods found
	I1011 21:01:39.228422  283686 system_pods.go:89] "coredns-7c65d6cfc9-rsfcm" [996e1047-8f10-483c-b830-62ec9c4b730f] Running
	I1011 21:01:39.228447  283686 system_pods.go:89] "csi-hostpath-attacher-0" [87ab25f5-e4b7-4fca-9c15-c48d19c12b6b] Running
	I1011 21:01:39.228470  283686 system_pods.go:89] "csi-hostpath-resizer-0" [a69f5a66-0e63-4da4-a58d-9b45ff6cea64] Running
	I1011 21:01:39.228505  283686 system_pods.go:89] "csi-hostpathplugin-62fx7" [86c47021-238a-4871-ac72-f78324ed2dd6] Running
	I1011 21:01:39.228551  283686 system_pods.go:89] "etcd-addons-627736" [827639aa-3bdc-40ac-aa45-a6fea950ca93] Running
	I1011 21:01:39.228575  283686 system_pods.go:89] "kindnet-dl4r6" [062ac268-a384-40a2-a21f-958b9a3a66b1] Running
	I1011 21:01:39.228604  283686 system_pods.go:89] "kube-apiserver-addons-627736" [995afcf6-521b-49ba-a610-46c76edc3841] Running
	I1011 21:01:39.228639  283686 system_pods.go:89] "kube-controller-manager-addons-627736" [8a9b26d8-92ce-4ff1-930e-b4b9d34f5b9c] Running
	I1011 21:01:39.228666  283686 system_pods.go:89] "kube-ingress-dns-minikube" [9ee3781e-ba5e-4b03-a5f5-cc32cc20407b] Running
	I1011 21:01:39.228686  283686 system_pods.go:89] "kube-proxy-p49c6" [995ebad4-48a5-48d5-a2aa-aef4671e5f5f] Running
	I1011 21:01:39.228716  283686 system_pods.go:89] "kube-scheduler-addons-627736" [2878cfca-4eed-4105-8c19-850954387751] Running
	I1011 21:01:39.228741  283686 system_pods.go:89] "metrics-server-84c5f94fbc-96mlh" [6cae95da-c64a-42fb-a86c-a65aa4fa0447] Running
	I1011 21:01:39.228764  283686 system_pods.go:89] "nvidia-device-plugin-daemonset-p9nsd" [41af943b-e0c9-4974-aa28-297cadfc3d28] Running
	I1011 21:01:39.228790  283686 system_pods.go:89] "registry-66c9cd494c-p6l9v" [0674412c-ee63-4347-b013-fcbb85bd1f6a] Running
	I1011 21:01:39.228820  283686 system_pods.go:89] "registry-proxy-hxsb7" [9f05d6fb-3f2f-4840-a6f5-392af1bf7e10] Running
	I1011 21:01:39.228848  283686 system_pods.go:89] "snapshot-controller-56fcc65765-5ldbm" [2c8ed9f6-cfa8-44fc-aa89-06743412532e] Running
	I1011 21:01:39.228871  283686 system_pods.go:89] "snapshot-controller-56fcc65765-df6h6" [14e51f4a-4153-44fa-a8e0-9db0a24b48d7] Running
	I1011 21:01:39.228897  283686 system_pods.go:89] "storage-provisioner" [f1e91d7e-5124-4e47-9e2f-6ef18efad060] Running
	I1011 21:01:39.228934  283686 system_pods.go:126] duration metric: took 12.213856ms to wait for k8s-apps to be running ...
	I1011 21:01:39.228970  283686 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 21:01:39.229073  283686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:01:39.244661  283686 system_svc.go:56] duration metric: took 15.681935ms WaitForService to wait for kubelet
	I1011 21:01:39.244734  283686 kubeadm.go:582] duration metric: took 2m43.719104575s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:01:39.244761  283686 node_conditions.go:102] verifying NodePressure condition ...
	I1011 21:01:39.248173  283686 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1011 21:01:39.248208  283686 node_conditions.go:123] node cpu capacity is 2
	I1011 21:01:39.248221  283686 node_conditions.go:105] duration metric: took 3.453827ms to run NodePressure ...
	I1011 21:01:39.248231  283686 start.go:241] waiting for startup goroutines ...
	I1011 21:01:39.248273  283686 start.go:246] waiting for cluster config update ...
	I1011 21:01:39.248300  283686 start.go:255] writing updated cluster config ...
	I1011 21:01:39.248636  283686 ssh_runner.go:195] Run: rm -f paused
	I1011 21:01:39.648489  283686 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 21:01:39.651504  283686 out.go:177] * Done! kubectl is now configured to use "addons-627736" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 11 21:03:50 addons-627736 crio[963]: time="2024-10-11 21:03:50.748759637Z" level=info msg="Removed pod sandbox: 40d85f22d44822f75e2d699a45a03d2ba5ad87e701841898093d28044d0a2cdf" id=c8e35f0c-bcfe-414e-93af-8a21c9bea8c4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.275546457Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-w257g/POD" id=4f7f8820-cfbd-401a-a789-c40761638c63 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.275608855Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.304126066Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-w257g Namespace:default ID:00fd4929d95031764305751d8665b89d4d2f6945dec5b497bc0f99c50c73296d UID:d62c91ba-1cc5-4723-ae3f-8516317b1c9c NetNS:/var/run/netns/4df21f0b-4936-4d47-919d-085a2627039b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.304307550Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-w257g to CNI network \"kindnet\" (type=ptp)"
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.322148176Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-w257g Namespace:default ID:00fd4929d95031764305751d8665b89d4d2f6945dec5b497bc0f99c50c73296d UID:d62c91ba-1cc5-4723-ae3f-8516317b1c9c NetNS:/var/run/netns/4df21f0b-4936-4d47-919d-085a2627039b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.322838036Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-w257g for CNI network kindnet (type=ptp)"
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.325655582Z" level=info msg="Ran pod sandbox 00fd4929d95031764305751d8665b89d4d2f6945dec5b497bc0f99c50c73296d with infra container: default/hello-world-app-55bf9c44b4-w257g/POD" id=4f7f8820-cfbd-401a-a789-c40761638c63 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.327471015Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9c9f5686-f5aa-4627-ab10-7acebe112c68 name=/runtime.v1.ImageService/ImageStatus
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.327693252Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=9c9f5686-f5aa-4627-ab10-7acebe112c68 name=/runtime.v1.ImageService/ImageStatus
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.329609810Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=6d383b26-7fd9-4e45-b3b5-6c70b57218b2 name=/runtime.v1.ImageService/PullImage
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.333436149Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 11 21:05:57 addons-627736 crio[963]: time="2024-10-11 21:05:57.633187195Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.363868859Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=6d383b26-7fd9-4e45-b3b5-6c70b57218b2 name=/runtime.v1.ImageService/PullImage
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.364863957Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=71be437c-1cd4-4c46-94dc-1b59db69c6ab name=/runtime.v1.ImageService/ImageStatus
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.365522146Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=71be437c-1cd4-4c46-94dc-1b59db69c6ab name=/runtime.v1.ImageService/ImageStatus
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.367269888Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5f30df49-cead-4bc0-a57d-af64197fd572 name=/runtime.v1.ImageService/ImageStatus
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.367876542Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5f30df49-cead-4bc0-a57d-af64197fd572 name=/runtime.v1.ImageService/ImageStatus
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.368992859Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-w257g/hello-world-app" id=96b9c29d-de88-4c8c-adeb-70fb4280fe03 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.369088848Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.392298088Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/05ce69ac98445bef4c7c95f8c4279a68056b84da62306e47223fa880594ef2b7/merged/etc/passwd: no such file or directory"
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.392474830Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/05ce69ac98445bef4c7c95f8c4279a68056b84da62306e47223fa880594ef2b7/merged/etc/group: no such file or directory"
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.448268334Z" level=info msg="Created container 16121ee7e05b280e3e18b9f9fac4cea52baeb758152a07b614e19ede2db0e8f9: default/hello-world-app-55bf9c44b4-w257g/hello-world-app" id=96b9c29d-de88-4c8c-adeb-70fb4280fe03 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.448831075Z" level=info msg="Starting container: 16121ee7e05b280e3e18b9f9fac4cea52baeb758152a07b614e19ede2db0e8f9" id=4159cae6-b94e-4c49-8a36-6ace18375c02 name=/runtime.v1.RuntimeService/StartContainer
	Oct 11 21:05:58 addons-627736 crio[963]: time="2024-10-11 21:05:58.465155412Z" level=info msg="Started container" PID=9222 containerID=16121ee7e05b280e3e18b9f9fac4cea52baeb758152a07b614e19ede2db0e8f9 description=default/hello-world-app-55bf9c44b4-w257g/hello-world-app id=4159cae6-b94e-4c49-8a36-6ace18375c02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00fd4929d95031764305751d8665b89d4d2f6945dec5b497bc0f99c50c73296d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	16121ee7e05b2       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   00fd4929d9503       hello-world-app-55bf9c44b4-w257g
	c3aec7d30a326       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                     0                   135524c97e22c       nginx
	947c9d648a4a3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                   0                   a6609a9a15c6c       busybox
	7b4b2ad554bf6       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             5 minutes ago            Running             controller                0                   8400a8cff258d       ingress-nginx-controller-5f85ff4588-c95cv
	4b8d57fb91acb       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             5 minutes ago            Running             minikube-ingress-dns      0                   dad0d33829e89       kube-ingress-dns-minikube
	88f6f9dcbf080       d54655ed3a8543a162b688a24bf969ee1a28d906b8ccb30188059247efdae234                                                             5 minutes ago            Exited              patch                     1                   3923ca9169476       ingress-nginx-admission-patch-h4f2j
	64b8cb7ba62e1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              create                    0                   270e8b82f28f6       ingress-nginx-admission-create-vrswx
	690907b416de7       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago            Running             local-path-provisioner    0                   08b2565019574       local-path-provisioner-86d989889c-nhfdh
	45f31afe6d34c       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        5 minutes ago            Running             metrics-server            0                   4ac405f0f5e95       metrics-server-84c5f94fbc-96mlh
	381fa28b97303       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago            Running             storage-provisioner       0                   b17ecc6108a50       storage-provisioner
	4cc9120dd28ec       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             6 minutes ago            Running             coredns                   0                   49e9bd477edf2       coredns-7c65d6cfc9-rsfcm
	aefe62e0ae416       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387                           6 minutes ago            Running             kindnet-cni               0                   aad151751b198       kindnet-dl4r6
	b1b0f6640b0b2       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             7 minutes ago            Running             kube-proxy                0                   c8a95826fbea3       kube-proxy-p49c6
	44786c037f505       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             7 minutes ago            Running             kube-controller-manager   0                   27d89d1299f3e       kube-controller-manager-addons-627736
	b1eae13f5a89d       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             7 minutes ago            Running             etcd                      0                   25a7fa043c3e7       etcd-addons-627736
	52a847d70739c       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             7 minutes ago            Running             kube-scheduler            0                   3889f77f1c862       kube-scheduler-addons-627736
	98ba21f18fbc7       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             7 minutes ago            Running             kube-apiserver            0                   734c3441632e0       kube-apiserver-addons-627736
	
	
	==> coredns [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab] <==
	[INFO] 10.244.0.17:50147 - 22463 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002086365s
	[INFO] 10.244.0.17:50147 - 18516 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000165296s
	[INFO] 10.244.0.17:50147 - 7897 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000170974s
	[INFO] 10.244.0.17:39255 - 10193 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00009882s
	[INFO] 10.244.0.17:39255 - 9964 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000188277s
	[INFO] 10.244.0.17:49443 - 54313 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067526s
	[INFO] 10.244.0.17:49443 - 54140 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000098902s
	[INFO] 10.244.0.17:53057 - 26572 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.002034445s
	[INFO] 10.244.0.17:53057 - 26143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.002124222s
	[INFO] 10.244.0.17:55107 - 40234 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001398072s
	[INFO] 10.244.0.17:55107 - 40053 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00158014s
	[INFO] 10.244.0.17:60653 - 64023 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007368s
	[INFO] 10.244.0.17:60653 - 63884 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116362s
	[INFO] 10.244.0.21:32810 - 29509 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00019841s
	[INFO] 10.244.0.21:52977 - 50079 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125855s
	[INFO] 10.244.0.21:36228 - 39214 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124132s
	[INFO] 10.244.0.21:56475 - 43477 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009955s
	[INFO] 10.244.0.21:43445 - 2460 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000482086s
	[INFO] 10.244.0.21:34308 - 46772 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129645s
	[INFO] 10.244.0.21:48905 - 30229 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002144546s
	[INFO] 10.244.0.21:55405 - 13567 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002585149s
	[INFO] 10.244.0.21:39348 - 7692 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00558398s
	[INFO] 10.244.0.21:45596 - 1103 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.005738643s
	[INFO] 10.244.0.23:35431 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000211424s
	[INFO] 10.244.0.23:38249 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000282003s
	
	
	==> describe nodes <==
	Name:               addons-627736
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-627736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=addons-627736
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T20_58_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-627736
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 20:58:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-627736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:05:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:03:55 +0000   Fri, 11 Oct 2024 20:58:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:03:55 +0000   Fri, 11 Oct 2024 20:58:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:03:55 +0000   Fri, 11 Oct 2024 20:58:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:03:55 +0000   Fri, 11 Oct 2024 20:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-627736
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 11b52b7913f542f1a28c3241c35ea74a
	  System UUID:                9b6d6844-2b7e-4842-b3ee-0008fd8800bf
	  Boot ID:                    cbc008aa-cc36-43a1-a971-3215ed2e69cb
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  default                     hello-world-app-55bf9c44b4-w257g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-c95cv    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m57s
	  kube-system                 coredns-7c65d6cfc9-rsfcm                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m4s
	  kube-system                 etcd-addons-627736                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m9s
	  kube-system                 kindnet-dl4r6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m4s
	  kube-system                 kube-apiserver-addons-627736                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-controller-manager-addons-627736        200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-proxy-p49c6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-scheduler-addons-627736                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 metrics-server-84c5f94fbc-96mlh              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  local-path-storage          local-path-provisioner-86d989889c-nhfdh      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m57s                  kube-proxy       
	  Normal   Starting                 7m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m16s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m16s (x8 over 7m16s)  kubelet          Node addons-627736 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m16s (x8 over 7m16s)  kubelet          Node addons-627736 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m16s (x7 over 7m16s)  kubelet          Node addons-627736 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m9s                   kubelet          Node addons-627736 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m9s                   kubelet          Node addons-627736 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m9s                   kubelet          Node addons-627736 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m5s                   node-controller  Node addons-627736 event: Registered Node addons-627736 in Controller
	  Normal   NodeReady                6m17s                  kubelet          Node addons-627736 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct11 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015629] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.448894] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.049457] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.016122] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.649193] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.194454] kauditd_printk_skb: 34 callbacks suppressed
	[Oct11 19:26] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct11 19:59] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.264105] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b] <==
	{"level":"info","ts":"2024-10-11T20:58:59.202522Z","caller":"traceutil/trace.go:171","msg":"trace[1249837494] linearizableReadLoop","detail":"{readStateIndex:407; appliedIndex:404; }","duration":"164.586849ms","start":"2024-10-11T20:58:59.037806Z","end":"2024-10-11T20:58:59.202393Z","steps":["trace[1249837494] 'read index received'  (duration: 282.905µs)","trace[1249837494] 'applied index is now lower than readState.Index'  (duration: 164.303394ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-11T20:58:59.205711Z","caller":"traceutil/trace.go:171","msg":"trace[673274089] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"384.526256ms","start":"2024-10-11T20:58:58.820321Z","end":"2024-10-11T20:58:59.204848Z","steps":["trace[673274089] 'process raft request'  (duration: 299.565359ms)","trace[673274089] 'marshal mvccpb.KeyValue' {req_type:put; key:/registry/pods/kube-system/kube-scheduler-addons-627736; req_size:4470; } (duration: 82.298852ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-11T20:58:59.207044Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T20:58:58.820302Z","time spent":"386.531145ms","remote":"127.0.0.1:49888","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4473,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-addons-627736\" mod_revision:307 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-addons-627736\" value_size:4410 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-addons-627736\" > >"}
	{"level":"info","ts":"2024-10-11T20:58:59.207254Z","caller":"traceutil/trace.go:171","msg":"trace[706224663] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"386.763557ms","start":"2024-10-11T20:58:58.820479Z","end":"2024-10-11T20:58:59.207242Z","steps":["trace[706224663] 'process raft request'  (duration: 381.842041ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:58:59.207311Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T20:58:58.820465Z","time spent":"386.809102ms","remote":"127.0.0.1:49976","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-cplqhdyhsvi6z23rwkln5suh7i\" mod_revision:59 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-cplqhdyhsvi6z23rwkln5suh7i\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-cplqhdyhsvi6z23rwkln5suh7i\" > >"}
	{"level":"info","ts":"2024-10-11T20:58:59.263129Z","caller":"traceutil/trace.go:171","msg":"trace[1140111483] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"143.888588ms","start":"2024-10-11T20:58:59.119226Z","end":"2024-10-11T20:58:59.263114Z","steps":["trace[1140111483] 'process raft request'  (duration: 87.690615ms)","trace[1140111483] 'compare'  (duration: 55.52637ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-11T20:58:59.263303Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.875855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T20:58:59.263338Z","caller":"traceutil/trace.go:171","msg":"trace[1102903783] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:400; }","duration":"143.914188ms","start":"2024-10-11T20:58:59.119417Z","end":"2024-10-11T20:58:59.263331Z","steps":["trace[1102903783] 'agreement among raft nodes before linearized reading'  (duration: 143.861931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:58:59.262900Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"468.258601ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-10-11T20:58:59.263766Z","caller":"traceutil/trace.go:171","msg":"trace[2126669368] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:399; }","duration":"469.136885ms","start":"2024-10-11T20:58:58.794617Z","end":"2024-10-11T20:58:59.263754Z","steps":["trace[2126669368] 'agreement among raft nodes before linearized reading'  (duration: 415.120141ms)","trace[2126669368] 'range keys from in-memory index tree'  (duration: 53.085325ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-11T20:58:59.263808Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T20:58:58.794303Z","time spent":"469.486889ms","remote":"127.0.0.1:49806","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":375,"request content":"key:\"/registry/namespaces/kube-system\" "}
	{"level":"warn","ts":"2024-10-11T20:58:59.649987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.48397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-addons-627736\" ","response":"range_response_count:1 size:7632"}
	{"level":"info","ts":"2024-10-11T20:58:59.650167Z","caller":"traceutil/trace.go:171","msg":"trace[693198531] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-addons-627736; range_end:; response_count:1; response_revision:411; }","duration":"104.992436ms","start":"2024-10-11T20:58:59.545158Z","end":"2024-10-11T20:58:59.650150Z","steps":["trace[693198531] 'agreement among raft nodes before linearized reading'  (duration: 100.307368ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:58:59.671711Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.364486ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T20:58:59.672285Z","caller":"traceutil/trace.go:171","msg":"trace[1549218058] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io; range_end:; response_count:0; response_revision:411; }","duration":"101.933454ms","start":"2024-10-11T20:58:59.570326Z","end":"2024-10-11T20:58:59.672260Z","steps":["trace[1549218058] 'agreement among raft nodes before linearized reading'  (duration: 101.352466ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:58:59.672499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.820186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-10-11T20:58:59.689787Z","caller":"traceutil/trace.go:171","msg":"trace[174889307] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:411; }","duration":"119.103438ms","start":"2024-10-11T20:58:59.570668Z","end":"2024-10-11T20:58:59.689772Z","steps":["trace[174889307] 'agreement among raft nodes before linearized reading'  (duration: 101.793856ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T20:59:01.026175Z","caller":"traceutil/trace.go:171","msg":"trace[573435994] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"101.369147ms","start":"2024-10-11T20:59:00.924781Z","end":"2024-10-11T20:59:01.026150Z","steps":["trace[573435994] 'process raft request'  (duration: 93.743233ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T20:59:01.026569Z","caller":"traceutil/trace.go:171","msg":"trace[1305598899] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"101.682172ms","start":"2024-10-11T20:59:00.924876Z","end":"2024-10-11T20:59:01.026558Z","steps":["trace[1305598899] 'process raft request'  (duration: 93.673573ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:59:01.026725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.984703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-attacher\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T20:59:01.026756Z","caller":"traceutil/trace.go:171","msg":"trace[535795001] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:0; response_revision:453; }","duration":"102.033079ms","start":"2024-10-11T20:59:00.924716Z","end":"2024-10-11T20:59:01.026749Z","steps":["trace[535795001] 'agreement among raft nodes before linearized reading'  (duration: 101.966209ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:59:01.027270Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.704461ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-10-11T20:59:01.027318Z","caller":"traceutil/trace.go:171","msg":"trace[2123143941] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:453; }","duration":"102.757292ms","start":"2024-10-11T20:59:00.924552Z","end":"2024-10-11T20:59:01.027310Z","steps":["trace[2123143941] 'agreement among raft nodes before linearized reading'  (duration: 102.622789ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:59:01.027457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.005842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T20:59:01.027490Z","caller":"traceutil/trace.go:171","msg":"trace[142831583] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:453; }","duration":"103.033157ms","start":"2024-10-11T20:59:00.924444Z","end":"2024-10-11T20:59:01.027477Z","steps":["trace[142831583] 'agreement among raft nodes before linearized reading'  (duration: 102.991107ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:05:59 up  2:48,  0 users,  load average: 0.28, 0.71, 0.57
	Linux addons-627736 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1] <==
	I1011 21:03:51.728383       1 main.go:300] handling current node
	I1011 21:04:01.724490       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:04:01.724520       1 main.go:300] handling current node
	I1011 21:04:11.727768       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:04:11.727803       1 main.go:300] handling current node
	I1011 21:04:21.732058       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:04:21.732092       1 main.go:300] handling current node
	I1011 21:04:31.728624       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:04:31.728658       1 main.go:300] handling current node
	I1011 21:04:41.723994       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:04:41.725076       1 main.go:300] handling current node
	I1011 21:04:51.724655       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:04:51.724689       1 main.go:300] handling current node
	I1011 21:05:01.723898       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:05:01.723938       1 main.go:300] handling current node
	I1011 21:05:11.729642       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:05:11.729676       1 main.go:300] handling current node
	I1011 21:05:21.723776       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:05:21.723906       1 main.go:300] handling current node
	I1011 21:05:31.724293       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:05:31.724329       1 main.go:300] handling current node
	I1011 21:05:41.730945       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:05:41.730979       1 main.go:300] handling current node
	I1011 21:05:51.724373       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:05:51.724419       1 main.go:300] handling current node
	
	
	==> kube-apiserver [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78] <==
	 > logger="UnhandledError"
	E1011 21:01:05.295578       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.164.40:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.164.40:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.164.40:443: connect: connection refused" logger="UnhandledError"
	I1011 21:01:05.362892       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1011 21:01:50.667244       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51094: use of closed network connection
	E1011 21:01:50.908243       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51134: use of closed network connection
	E1011 21:01:51.061439       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51142: use of closed network connection
	E1011 21:02:16.959242       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1011 21:02:25.450041       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.79.233"}
	I1011 21:02:48.661028       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1011 21:03:19.580521       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:19.581669       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:19.625633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:19.625807       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:19.651968       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:19.652773       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:19.751605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:19.752094       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1011 21:03:20.654410       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1011 21:03:20.752087       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1011 21:03:20.862001       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1011 21:03:33.330447       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1011 21:03:34.357473       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1011 21:03:38.863277       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1011 21:03:39.173708       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.46.91"}
	I1011 21:05:57.211289       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.110.249"}
	
	
	==> kube-controller-manager [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432] <==
	W1011 21:04:29.599159       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:04:29.599206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:04:44.321788       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:04:44.321832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:04:49.160120       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:04:49.160162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:05:00.359379       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:00.359427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:05:01.978726       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:01.978776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:05:19.800089       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:19.800132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:05:37.112616       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:37.112658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:05:38.608084       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:38.608130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:05:53.551661       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:53.551705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1011 21:05:56.977678       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.538727ms"
	I1011 21:05:56.999769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="21.977257ms"
	I1011 21:05:56.999840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.715µs"
	W1011 21:05:58.014073       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:58.014114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1011 21:05:58.763331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.379992ms"
	I1011 21:05:58.763506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="30.563µs"
	
	
	==> kube-proxy [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e] <==
	I1011 20:59:00.859908       1 server_linux.go:66] "Using iptables proxy"
	I1011 20:59:01.400312       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1011 20:59:01.434619       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 20:59:01.832769       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1011 20:59:01.832849       1 server_linux.go:169] "Using iptables Proxier"
	I1011 20:59:01.925993       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 20:59:01.928942       1 server.go:483] "Version info" version="v1.31.1"
	I1011 20:59:01.929042       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 20:59:01.934370       1 config.go:199] "Starting service config controller"
	I1011 20:59:01.934941       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 20:59:01.935028       1 config.go:105] "Starting endpoint slice config controller"
	I1011 20:59:01.935035       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 20:59:01.935510       1 config.go:328] "Starting node config controller"
	I1011 20:59:01.935518       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 20:59:02.068717       1 shared_informer.go:320] Caches are synced for node config
	I1011 20:59:02.068821       1 shared_informer.go:320] Caches are synced for service config
	I1011 20:59:02.068881       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015] <==
	W1011 20:58:48.801939       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1011 20:58:48.802589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.801989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1011 20:58:48.802694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.802036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 20:58:48.802784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.802495       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 20:58:48.802896       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1011 20:58:48.806235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 20:58:48.806312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 20:58:48.806476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806551       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 20:58:48.806614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 20:58:48.806732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 20:58:48.806875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1011 20:58:48.806993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 20:58:48.807027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.807077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 20:58:48.807124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1011 20:58:50.198675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 21:04:10 addons-627736 kubelet[1500]: E1011 21:04:10.588673    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680650588372537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:04:20 addons-627736 kubelet[1500]: E1011 21:04:20.591712    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680660591462840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:04:20 addons-627736 kubelet[1500]: E1011 21:04:20.591753    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680660591462840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:04:21 addons-627736 kubelet[1500]: I1011 21:04:21.252266    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:04:30 addons-627736 kubelet[1500]: E1011 21:04:30.594121    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680670593887056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:04:30 addons-627736 kubelet[1500]: E1011 21:04:30.594157    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680670593887056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:04:40 addons-627736 kubelet[1500]: E1011 21:04:40.596656    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680680596394898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:04:40 addons-627736 kubelet[1500]: E1011 21:04:40.596694    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680680596394898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:04:50 addons-627736 kubelet[1500]: E1011 21:04:50.599414    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680690599143553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:04:50 addons-627736 kubelet[1500]: E1011 21:04:50.599456    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680690599143553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:00 addons-627736 kubelet[1500]: E1011 21:05:00.602783    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680700602523414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:00 addons-627736 kubelet[1500]: E1011 21:05:00.602821    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680700602523414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:10 addons-627736 kubelet[1500]: E1011 21:05:10.605828    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680710605562591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:10 addons-627736 kubelet[1500]: E1011 21:05:10.605864    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680710605562591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:20 addons-627736 kubelet[1500]: E1011 21:05:20.608312    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680720608097434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:20 addons-627736 kubelet[1500]: E1011 21:05:20.608357    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680720608097434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:30 addons-627736 kubelet[1500]: E1011 21:05:30.611738    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680730611488372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:30 addons-627736 kubelet[1500]: E1011 21:05:30.611774    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680730611488372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:40 addons-627736 kubelet[1500]: E1011 21:05:40.614977    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680740614703259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:40 addons-627736 kubelet[1500]: E1011 21:05:40.615017    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680740614703259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:47 addons-627736 kubelet[1500]: I1011 21:05:47.252652    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:05:50 addons-627736 kubelet[1500]: E1011 21:05:50.617693    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680750617449020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:50 addons-627736 kubelet[1500]: E1011 21:05:50.617730    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680750617449020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597936,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:05:56 addons-627736 kubelet[1500]: I1011 21:05:56.972213    1500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=137.288300682 podStartE2EDuration="2m17.972193862s" podCreationTimestamp="2024-10-11 21:03:39 +0000 UTC" firstStartedPulling="2024-10-11 21:03:39.454144336 +0000 UTC m=+289.331233186" lastFinishedPulling="2024-10-11 21:03:40.138037516 +0000 UTC m=+290.015126366" observedRunningTime="2024-10-11 21:03:40.487544714 +0000 UTC m=+290.364633564" watchObservedRunningTime="2024-10-11 21:05:56.972193862 +0000 UTC m=+426.849282711"
	Oct 11 21:05:57 addons-627736 kubelet[1500]: I1011 21:05:57.099088    1500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llz4l\" (UniqueName: \"kubernetes.io/projected/d62c91ba-1cc5-4723-ae3f-8516317b1c9c-kube-api-access-llz4l\") pod \"hello-world-app-55bf9c44b4-w257g\" (UID: \"d62c91ba-1cc5-4723-ae3f-8516317b1c9c\") " pod="default/hello-world-app-55bf9c44b4-w257g"
	
	
	==> storage-provisioner [381fa28b97303cf241449693f03a0ef78f01313a4347175b735a1dc510847596] <==
	I1011 20:59:42.845004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 20:59:42.939797       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 20:59:42.939929       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 20:59:42.976569       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 20:59:42.977397       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-627736_29ba4c93-c83a-438b-94c7-7f2b7d10ae2c!
	I1011 20:59:42.978825       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d68dabc9-4d42-4ae2-86e5-41350b7a4f68", APIVersion:"v1", ResourceVersion:"920", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-627736_29ba4c93-c83a-438b-94c7-7f2b7d10ae2c became leader
	I1011 20:59:43.078057       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-627736_29ba4c93-c83a-438b-94c7-7f2b7d10ae2c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-627736 -n addons-627736
helpers_test.go:261: (dbg) Run:  kubectl --context addons-627736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-vrswx ingress-nginx-admission-patch-h4f2j
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-627736 describe pod ingress-nginx-admission-create-vrswx ingress-nginx-admission-patch-h4f2j
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-627736 describe pod ingress-nginx-admission-create-vrswx ingress-nginx-admission-patch-h4f2j: exit status 1 (90.040623ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-vrswx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h4f2j" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-627736 describe pod ingress-nginx-admission-create-vrswx ingress-nginx-admission-patch-h4f2j: exit status 1
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-627736 addons disable ingress-dns --alsologtostderr -v=1: (1.425623972s)
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable ingress --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-627736 addons disable ingress --alsologtostderr -v=1: (7.74594187s)
--- FAIL: TestAddons/parallel/Ingress (150.99s)

TestAddons/parallel/MetricsServer (343.93s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.709329ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-96mlh" [6cae95da-c64a-42fb-a86c-a65aa4fa0447] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003812541s
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (96.156312ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 3m53.46455579s

** /stderr **
I1011 21:02:48.467955  282920 retry.go:31] will retry after 1.849005236s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (170.202158ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 3m55.485299102s

** /stderr **
I1011 21:02:50.488312  282920 retry.go:31] will retry after 2.824725282s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (94.754736ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 3m58.403983301s

** /stderr **
I1011 21:02:53.408147  282920 retry.go:31] will retry after 9.261786196s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (89.997357ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 4m7.757986065s

** /stderr **
I1011 21:03:02.761072  282920 retry.go:31] will retry after 13.024647427s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (93.030464ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 4m20.875572439s

** /stderr **
I1011 21:03:15.879191  282920 retry.go:31] will retry after 19.482364326s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (94.071579ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 4m40.452030881s

** /stderr **
I1011 21:03:35.456917  282920 retry.go:31] will retry after 12.029385457s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (90.842609ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 4m52.575289268s

** /stderr **
I1011 21:03:47.578263  282920 retry.go:31] will retry after 40.388741815s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (88.050718ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 5m33.05361561s

** /stderr **
I1011 21:04:28.056332  282920 retry.go:31] will retry after 1m11.735157532s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (86.680591ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 6m44.875815533s

** /stderr **
I1011 21:05:39.879390  282920 retry.go:31] will retry after 1m17.059809225s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (88.540101ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 8m2.029800078s

** /stderr **
I1011 21:06:57.033378  282920 retry.go:31] will retry after 36.774669134s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (83.886408ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 8m38.892410939s

** /stderr **
I1011 21:07:33.895513  282920 retry.go:31] will retry after 49.364586695s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-627736 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-627736 top pods -n kube-system: exit status 1 (85.670486ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rsfcm, age: 9m28.34410947s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-627736
helpers_test.go:235: (dbg) docker inspect addons-627736:

-- stdout --
	[
	    {
	        "Id": "9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85",
	        "Created": "2024-10-11T20:58:28.114718101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284174,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-11T20:58:28.238353139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85/hosts",
	        "LogPath": "/var/lib/docker/containers/9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85/9cbb45944b0e61bf0e0c42379a1ee89fcd585ef5241e36f9785005e95221ea85-json.log",
	        "Name": "/addons-627736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-627736:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-627736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/63c4e1db91c64bca9eea91b5f392f4eb6a456636ca1dcba2d63d4a3f43f563df-init/diff:/var/lib/docker/overlay2/71b5c158b789443874429d56b0e70559f5769113100aad8f0c3428abb77f0cef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63c4e1db91c64bca9eea91b5f392f4eb6a456636ca1dcba2d63d4a3f43f563df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63c4e1db91c64bca9eea91b5f392f4eb6a456636ca1dcba2d63d4a3f43f563df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63c4e1db91c64bca9eea91b5f392f4eb6a456636ca1dcba2d63d4a3f43f563df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-627736",
	                "Source": "/var/lib/docker/volumes/addons-627736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-627736",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-627736",
	                "name.minikube.sigs.k8s.io": "addons-627736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6af8f0de907e2156042f51459dce13bdc8e944c37437e28fc613a89c8b8683e8",
	            "SandboxKey": "/var/run/docker/netns/6af8f0de907e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-627736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0344146e4445be92b5ffb06059262a7e24bfaf0cf3d149aa52e9622f8b2646a5",
	                    "EndpointID": "c547f1ac6145b2c2ddf3eeac89f1d8ea66e5187cb9598733847587a3a08da57d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-627736",
	                        "9cbb45944b0e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-627736 -n addons-627736
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-627736 logs -n 25: (1.369003795s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-358295 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | download-docker-358295                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-358295                                                                   | download-docker-358295 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-919124   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | binary-mirror-919124                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45157                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-919124                                                                     | binary-mirror-919124   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| addons  | disable dashboard -p                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | addons-627736                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | addons-627736                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-627736 --wait=true                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 21:01 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:01 UTC | 11 Oct 24 21:01 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:01 UTC | 11 Oct 24 21:01 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-627736 ip                                                                            | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-627736 addons                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-627736 addons                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | -p addons-627736                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-627736 ssh cat                                                                       | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | /opt/local-path-provisioner/pvc-1c41c8d6-e192-4aab-96f5-793834495bbd_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-627736 addons                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-627736 addons                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-627736 addons                                                                        | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-627736 ssh curl -s                                                                   | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-627736 ip                                                                            | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:05 UTC | 11 Oct 24 21:05 UTC |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:06 UTC | 11 Oct 24 21:06 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-627736 addons disable                                                                | addons-627736          | jenkins | v1.34.0 | 11 Oct 24 21:06 UTC | 11 Oct 24 21:06 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:58:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:58:03.880755  283686 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:58:03.880963  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:03.880977  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 20:58:03.880984  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:03.881378  283686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	I1011 20:58:03.881938  283686 out.go:352] Setting JSON to false
	I1011 20:58:03.883321  283686 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9627,"bootTime":1728670657,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1011 20:58:03.883421  283686 start.go:139] virtualization:  
	I1011 20:58:03.885288  283686 out.go:177] * [addons-627736] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 20:58:03.887096  283686 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 20:58:03.887220  283686 notify.go:220] Checking for updates...
	I1011 20:58:03.889732  283686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:58:03.891136  283686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	I1011 20:58:03.892576  283686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	I1011 20:58:03.894143  283686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 20:58:03.895316  283686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 20:58:03.896729  283686 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:58:03.917109  283686 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 20:58:03.917241  283686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:58:03.980042  283686 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-11 20:58:03.970919229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:58:03.980156  283686 docker.go:318] overlay module found
	I1011 20:58:03.982242  283686 out.go:177] * Using the docker driver based on user configuration
	I1011 20:58:03.983375  283686 start.go:297] selected driver: docker
	I1011 20:58:03.983391  283686 start.go:901] validating driver "docker" against <nil>
	I1011 20:58:03.983405  283686 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 20:58:03.984042  283686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:58:04.031998  283686 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-11 20:58:04.022413748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:58:04.032222  283686 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:58:04.032451  283686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 20:58:04.034037  283686 out.go:177] * Using Docker driver with root privileges
	I1011 20:58:04.035534  283686 cni.go:84] Creating CNI manager for ""
	I1011 20:58:04.035598  283686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1011 20:58:04.035616  283686 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 20:58:04.035695  283686 start.go:340] cluster config:
	{Name:addons-627736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-627736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:58:04.037360  283686 out.go:177] * Starting "addons-627736" primary control-plane node in "addons-627736" cluster
	I1011 20:58:04.038669  283686 cache.go:121] Beginning downloading kic base image for docker with crio
	I1011 20:58:04.040244  283686 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1011 20:58:04.041386  283686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:58:04.041433  283686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1011 20:58:04.041456  283686 cache.go:56] Caching tarball of preloaded images
	I1011 20:58:04.041478  283686 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1011 20:58:04.041540  283686 preload.go:172] Found /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1011 20:58:04.041550  283686 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 20:58:04.041901  283686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/config.json ...
	I1011 20:58:04.041921  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/config.json: {Name:mkb65e81161297914bc823260d8d954cd6c3cfff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:04.056055  283686 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1011 20:58:04.056170  283686 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1011 20:58:04.056201  283686 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1011 20:58:04.056209  283686 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1011 20:58:04.056217  283686 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1011 20:58:04.056223  283686 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1011 20:58:21.249752  283686 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1011 20:58:21.249794  283686 cache.go:194] Successfully downloaded all kic artifacts
	I1011 20:58:21.249839  283686 start.go:360] acquireMachinesLock for addons-627736: {Name:mkf3c6eb944bfebe208beb6538a765296fcc1455 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:21.249966  283686 start.go:364] duration metric: took 103.004µs to acquireMachinesLock for "addons-627736"
	I1011 20:58:21.250006  283686 start.go:93] Provisioning new machine with config: &{Name:addons-627736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-627736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 20:58:21.250082  283686 start.go:125] createHost starting for "" (driver="docker")
	I1011 20:58:21.251907  283686 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1011 20:58:21.252146  283686 start.go:159] libmachine.API.Create for "addons-627736" (driver="docker")
	I1011 20:58:21.252179  283686 client.go:168] LocalClient.Create starting
	I1011 20:58:21.252282  283686 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem
	I1011 20:58:21.623420  283686 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/cert.pem
	I1011 20:58:21.831240  283686 cli_runner.go:164] Run: docker network inspect addons-627736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1011 20:58:21.846209  283686 cli_runner.go:211] docker network inspect addons-627736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1011 20:58:21.846310  283686 network_create.go:284] running [docker network inspect addons-627736] to gather additional debugging logs...
	I1011 20:58:21.846333  283686 cli_runner.go:164] Run: docker network inspect addons-627736
	W1011 20:58:21.861211  283686 cli_runner.go:211] docker network inspect addons-627736 returned with exit code 1
	I1011 20:58:21.861244  283686 network_create.go:287] error running [docker network inspect addons-627736]: docker network inspect addons-627736: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-627736 not found
	I1011 20:58:21.861258  283686 network_create.go:289] output of [docker network inspect addons-627736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-627736 not found
	
	** /stderr **
	I1011 20:58:21.861381  283686 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 20:58:21.876600  283686 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c75e40}
	I1011 20:58:21.876644  283686 network_create.go:124] attempt to create docker network addons-627736 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1011 20:58:21.876707  283686 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-627736 addons-627736
	I1011 20:58:21.951616  283686 network_create.go:108] docker network addons-627736 192.168.49.0/24 created
	I1011 20:58:21.951650  283686 kic.go:121] calculated static IP "192.168.49.2" for the "addons-627736" container
	I1011 20:58:21.951736  283686 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1011 20:58:21.966398  283686 cli_runner.go:164] Run: docker volume create addons-627736 --label name.minikube.sigs.k8s.io=addons-627736 --label created_by.minikube.sigs.k8s.io=true
	I1011 20:58:21.981587  283686 oci.go:103] Successfully created a docker volume addons-627736
	I1011 20:58:21.981687  283686 cli_runner.go:164] Run: docker run --rm --name addons-627736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-627736 --entrypoint /usr/bin/test -v addons-627736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1011 20:58:23.992207  283686 cli_runner.go:217] Completed: docker run --rm --name addons-627736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-627736 --entrypoint /usr/bin/test -v addons-627736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (2.010477722s)
	I1011 20:58:23.992237  283686 oci.go:107] Successfully prepared a docker volume addons-627736
	I1011 20:58:23.992264  283686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:58:23.992283  283686 kic.go:194] Starting extracting preloaded images to volume ...
	I1011 20:58:23.992349  283686 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-627736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1011 20:58:28.046362  283686 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-627736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.053952635s)
	I1011 20:58:28.046399  283686 kic.go:203] duration metric: took 4.054111147s to extract preloaded images to volume ...
	W1011 20:58:28.046535  283686 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1011 20:58:28.046640  283686 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1011 20:58:28.100545  283686 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-627736 --name addons-627736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-627736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-627736 --network addons-627736 --ip 192.168.49.2 --volume addons-627736:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1011 20:58:28.401077  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Running}}
	I1011 20:58:28.424804  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:28.448134  283686 cli_runner.go:164] Run: docker exec addons-627736 stat /var/lib/dpkg/alternatives/iptables
	I1011 20:58:28.509241  283686 oci.go:144] the created container "addons-627736" has a running status.
	I1011 20:58:28.509332  283686 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa...
	I1011 20:58:29.289444  283686 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1011 20:58:29.325985  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:29.345696  283686 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1011 20:58:29.345722  283686 kic_runner.go:114] Args: [docker exec --privileged addons-627736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1011 20:58:29.429963  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:29.449794  283686 machine.go:93] provisionDockerMachine start ...
	I1011 20:58:29.449886  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:29.472739  283686 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:29.473009  283686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1011 20:58:29.473019  283686 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 20:58:29.602251  283686 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-627736
	
	I1011 20:58:29.602291  283686 ubuntu.go:169] provisioning hostname "addons-627736"
	I1011 20:58:29.602359  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:29.622090  283686 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:29.622336  283686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1011 20:58:29.622353  283686 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-627736 && echo "addons-627736" | sudo tee /etc/hostname
	I1011 20:58:29.762000  283686 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-627736
	
	I1011 20:58:29.762078  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:29.779890  283686 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:29.780134  283686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1011 20:58:29.780156  283686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-627736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-627736/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-627736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 20:58:29.906682  283686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 20:58:29.906709  283686 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19749-277533/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-277533/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-277533/.minikube}
	I1011 20:58:29.906741  283686 ubuntu.go:177] setting up certificates
	I1011 20:58:29.906753  283686 provision.go:84] configureAuth start
	I1011 20:58:29.906822  283686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-627736
	I1011 20:58:29.923206  283686 provision.go:143] copyHostCerts
	I1011 20:58:29.923294  283686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-277533/.minikube/ca.pem (1078 bytes)
	I1011 20:58:29.923429  283686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-277533/.minikube/cert.pem (1123 bytes)
	I1011 20:58:29.923489  283686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-277533/.minikube/key.pem (1679 bytes)
	I1011 20:58:29.923575  283686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-277533/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca-key.pem org=jenkins.addons-627736 san=[127.0.0.1 192.168.49.2 addons-627736 localhost minikube]
	I1011 20:58:30.229960  283686 provision.go:177] copyRemoteCerts
	I1011 20:58:30.230035  283686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 20:58:30.230086  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.246690  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:30.339941  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1011 20:58:30.364285  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 20:58:30.389324  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 20:58:30.413946  283686 provision.go:87] duration metric: took 507.17495ms to configureAuth
	I1011 20:58:30.414016  283686 ubuntu.go:193] setting minikube options for container-runtime
	I1011 20:58:30.414235  283686 config.go:182] Loaded profile config "addons-627736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:58:30.414357  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.430663  283686 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:30.431005  283686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1011 20:58:30.431032  283686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 20:58:30.658332  283686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 20:58:30.658406  283686 machine.go:96] duration metric: took 1.208590933s to provisionDockerMachine
	I1011 20:58:30.658432  283686 client.go:171] duration metric: took 9.406242299s to LocalClient.Create
	I1011 20:58:30.658480  283686 start.go:167] duration metric: took 9.406334211s to libmachine.API.Create "addons-627736"
	I1011 20:58:30.658506  283686 start.go:293] postStartSetup for "addons-627736" (driver="docker")
	I1011 20:58:30.658534  283686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 20:58:30.658686  283686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 20:58:30.658793  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.676041  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:30.767963  283686 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 20:58:30.771357  283686 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 20:58:30.771394  283686 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 20:58:30.771406  283686 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 20:58:30.771413  283686 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1011 20:58:30.771425  283686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-277533/.minikube/addons for local assets ...
	I1011 20:58:30.771501  283686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-277533/.minikube/files for local assets ...
	I1011 20:58:30.771529  283686 start.go:296] duration metric: took 113.001425ms for postStartSetup
	I1011 20:58:30.771866  283686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-627736
	I1011 20:58:30.787918  283686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/config.json ...
	I1011 20:58:30.788202  283686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 20:58:30.788255  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.804194  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:30.895929  283686 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 20:58:30.900658  283686 start.go:128] duration metric: took 9.650559191s to createHost
	I1011 20:58:30.900686  283686 start.go:83] releasing machines lock for "addons-627736", held for 9.65070774s
	I1011 20:58:30.900772  283686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-627736
	I1011 20:58:30.916556  283686 ssh_runner.go:195] Run: cat /version.json
	I1011 20:58:30.916586  283686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 20:58:30.916611  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.916664  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:30.936317  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:30.945253  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:31.156867  283686 ssh_runner.go:195] Run: systemctl --version
	I1011 20:58:31.161186  283686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 20:58:31.305800  283686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 20:58:31.310367  283686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 20:58:31.331473  283686 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1011 20:58:31.331562  283686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 20:58:31.363513  283686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1011 20:58:31.363536  283686 start.go:495] detecting cgroup driver to use...
	I1011 20:58:31.363600  283686 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1011 20:58:31.363666  283686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 20:58:31.379890  283686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 20:58:31.391045  283686 docker.go:217] disabling cri-docker service (if available) ...
	I1011 20:58:31.391120  283686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 20:58:31.405838  283686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 20:58:31.420382  283686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 20:58:31.511648  283686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 20:58:31.612435  283686 docker.go:233] disabling docker service ...
	I1011 20:58:31.612560  283686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 20:58:31.633654  283686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 20:58:31.646033  283686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 20:58:31.738736  283686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 20:58:31.833045  283686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 20:58:31.844054  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 20:58:31.860719  283686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 20:58:31.860797  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.871060  283686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 20:58:31.871142  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.881916  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.891430  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.901069  283686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 20:58:31.910112  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.919683  283686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.935721  283686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:58:31.945571  283686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 20:58:31.954207  283686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 20:58:31.962756  283686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:58:32.046338  283686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 20:58:32.158355  283686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 20:58:32.158451  283686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 20:58:32.162188  283686 start.go:563] Will wait 60s for crictl version
	I1011 20:58:32.162252  283686 ssh_runner.go:195] Run: which crictl
	I1011 20:58:32.165699  283686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 20:58:32.206326  283686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1011 20:58:32.206430  283686 ssh_runner.go:195] Run: crio --version
	I1011 20:58:32.243108  283686 ssh_runner.go:195] Run: crio --version
	I1011 20:58:32.283236  283686 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1011 20:58:32.284599  283686 cli_runner.go:164] Run: docker network inspect addons-627736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 20:58:32.298937  283686 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1011 20:58:32.302470  283686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 20:58:32.313319  283686 kubeadm.go:883] updating cluster {Name:addons-627736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-627736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 20:58:32.313444  283686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:58:32.313501  283686 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 20:58:32.384346  283686 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 20:58:32.384370  283686 crio.go:433] Images already preloaded, skipping extraction
	I1011 20:58:32.384428  283686 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 20:58:32.420745  283686 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 20:58:32.420771  283686 cache_images.go:84] Images are preloaded, skipping loading
	I1011 20:58:32.420779  283686 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1011 20:58:32.420868  283686 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-627736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-627736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 20:58:32.420951  283686 ssh_runner.go:195] Run: crio config
	I1011 20:58:32.471284  283686 cni.go:84] Creating CNI manager for ""
	I1011 20:58:32.471307  283686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1011 20:58:32.471319  283686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 20:58:32.471344  283686 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-627736 NodeName:addons-627736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 20:58:32.471491  283686 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-627736"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
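The kubeadm config dumped above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, separated by `---`). As a minimal stdlib-only sketch, the stream can be split on the document separators and each document's `kind` listed; the `CONFIG` string below is an abbreviated stand-in for the generated file, not the file itself:

```python
# Split a multi-document YAML stream on "---" separator lines and report
# each document's kind, using only the Python standard library.
import re

CONFIG = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def list_kinds(stream: str) -> list[str]:
    kinds = []
    # "---" on a line of its own separates YAML documents.
    for doc in re.split(r"(?m)^---\s*$", stream):
        m = re.search(r"(?m)^kind:\s*(\S+)", doc)
        if m:
            kinds.append(m.group(1))
    return kinds

print(list_kinds(CONFIG))
# → ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration', 'KubeProxyConfiguration']
```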
	I1011 20:58:32.471563  283686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 20:58:32.480367  283686 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 20:58:32.480467  283686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 20:58:32.489215  283686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1011 20:58:32.507493  283686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 20:58:32.525609  283686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1011 20:58:32.543454  283686 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1011 20:58:32.546616  283686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
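The `/etc/hosts` update above is idempotent: the bash one-liner first filters out any existing `control-plane.minikube.internal` line, then appends the fresh mapping. The same drop-then-append logic, sketched on a plain string for illustration (`pin_host` is a hypothetical helper, not minikube code):

```python
# Idempotently (re)pin a hostname to an IP in hosts-file content:
# drop any existing line ending in "<TAB><name>", then append the new mapping.
def pin_host(hosts: str, ip: str, name: str) -> str:
    kept = [line for line in hosts.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.49.1\tcontrol-plane.minikube.internal\n"
after = pin_host(before, "192.168.49.2", "control-plane.minikube.internal")
print(after)
# → 127.0.0.1	localhost
#   192.168.49.2	control-plane.minikube.internal
```

Running it twice with the same arguments leaves the content unchanged, which is why the log can safely repeat this step on every start.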
	I1011 20:58:32.557675  283686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:58:32.642560  283686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 20:58:32.655924  283686 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736 for IP: 192.168.49.2
	I1011 20:58:32.655962  283686 certs.go:194] generating shared ca certs ...
	I1011 20:58:32.655978  283686 certs.go:226] acquiring lock for ca certs: {Name:mk54de457899109c47c9262eb70cea93f226fb7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:32.656695  283686 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-277533/.minikube/ca.key
	I1011 20:58:33.120366  283686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt ...
	I1011 20:58:33.120397  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt: {Name:mk35e22facab7399875c11316c5e90e2812fb42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:33.120600  283686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-277533/.minikube/ca.key ...
	I1011 20:58:33.120616  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/ca.key: {Name:mk3f7e21b09c48a1e47b9012985e77cb50d8340c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:33.120731  283686 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.key
	I1011 20:58:33.772414  283686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.crt ...
	I1011 20:58:33.772446  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.crt: {Name:mk307405633594918b57a6584f1a74b6db576163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:33.772644  283686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.key ...
	I1011 20:58:33.772657  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.key: {Name:mka331898c20bab8f0b0cc436658a676570f7a25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:33.773198  283686 certs.go:256] generating profile certs ...
	I1011 20:58:33.773287  283686 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.key
	I1011 20:58:33.773305  283686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt with IP's: []
	I1011 20:58:34.276641  283686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt ...
	I1011 20:58:34.276679  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: {Name:mk06faad2ead76e76fe953049fcc04a05cd3d303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.276875  283686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.key ...
	I1011 20:58:34.276888  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.key: {Name:mkb296892a42f0228b7f0f5199473b64a3b763a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.276971  283686 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key.b4fb07ca
	I1011 20:58:34.276992  283686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt.b4fb07ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1011 20:58:34.975206  283686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt.b4fb07ca ...
	I1011 20:58:34.975239  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt.b4fb07ca: {Name:mk3f1afa3d6f256a3919fc5dd2e40459f4a45811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.975428  283686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key.b4fb07ca ...
	I1011 20:58:34.975443  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key.b4fb07ca: {Name:mkfe5252482039790baf8249a5dccdaf06a315d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.975922  283686 certs.go:381] copying /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt.b4fb07ca -> /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt
	I1011 20:58:34.976013  283686 certs.go:385] copying /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key.b4fb07ca -> /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key
	I1011 20:58:34.976067  283686 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.key
	I1011 20:58:34.976091  283686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.crt with IP's: []
	I1011 20:58:35.199679  283686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.crt ...
	I1011 20:58:35.199709  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.crt: {Name:mk98e1ffdd01236d0fe4f5851e298fed70995f6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:35.200249  283686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.key ...
	I1011 20:58:35.200266  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.key: {Name:mk2bc2d21b88b1aacf8b6f48b230ed56733a4ddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:35.200462  283686 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 20:58:35.200514  283686 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/ca.pem (1078 bytes)
	I1011 20:58:35.200544  283686 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/cert.pem (1123 bytes)
	I1011 20:58:35.200573  283686 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-277533/.minikube/certs/key.pem (1679 bytes)
	I1011 20:58:35.201205  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 20:58:35.226764  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1011 20:58:35.252289  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 20:58:35.277250  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 20:58:35.301724  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1011 20:58:35.325107  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 20:58:35.348852  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 20:58:35.372585  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 20:58:35.395939  283686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 20:58:35.419420  283686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 20:58:35.437244  283686 ssh_runner.go:195] Run: openssl version
	I1011 20:58:35.442717  283686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 20:58:35.452447  283686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:58:35.456023  283686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:58:35.456137  283686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:58:35.463031  283686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 20:58:35.472473  283686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 20:58:35.475697  283686 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 20:58:35.475749  283686 kubeadm.go:392] StartCluster: {Name:addons-627736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-627736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:58:35.475828  283686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 20:58:35.475885  283686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 20:58:35.521690  283686 cri.go:89] found id: ""
	I1011 20:58:35.521762  283686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 20:58:35.531125  283686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 20:58:35.539897  283686 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1011 20:58:35.540012  283686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 20:58:35.549071  283686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 20:58:35.549092  283686 kubeadm.go:157] found existing configuration files:
	
	I1011 20:58:35.549167  283686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 20:58:35.557942  283686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 20:58:35.558060  283686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 20:58:35.566689  283686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 20:58:35.575289  283686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 20:58:35.575354  283686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 20:58:35.584554  283686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 20:58:35.594387  283686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 20:58:35.594452  283686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 20:58:35.605806  283686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 20:58:35.615484  283686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 20:58:35.615549  283686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 20:58:35.624972  283686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1011 20:58:35.673707  283686 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 20:58:35.673767  283686 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 20:58:35.693219  283686 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1011 20:58:35.693380  283686 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1011 20:58:35.693454  283686 kubeadm.go:310] OS: Linux
	I1011 20:58:35.693536  283686 kubeadm.go:310] CGROUPS_CPU: enabled
	I1011 20:58:35.693621  283686 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1011 20:58:35.693715  283686 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1011 20:58:35.693784  283686 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1011 20:58:35.693836  283686 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1011 20:58:35.693889  283686 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1011 20:58:35.693938  283686 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1011 20:58:35.693989  283686 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1011 20:58:35.694040  283686 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1011 20:58:35.754926  283686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 20:58:35.755054  283686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 20:58:35.755148  283686 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 20:58:35.761410  283686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 20:58:35.764002  283686 out.go:235]   - Generating certificates and keys ...
	I1011 20:58:35.764108  283686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 20:58:35.764179  283686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 20:58:36.070218  283686 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 20:58:36.922122  283686 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 20:58:37.433341  283686 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 20:58:37.653320  283686 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 20:58:37.823798  283686 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 20:58:37.824189  283686 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-627736 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1011 20:58:38.312503  283686 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 20:58:38.312871  283686 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-627736 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1011 20:58:38.676067  283686 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 20:58:38.842380  283686 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 20:58:39.530467  283686 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 20:58:39.530926  283686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 20:58:40.148868  283686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 20:58:40.886377  283686 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 20:58:41.295098  283686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 20:58:41.734649  283686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 20:58:42.364380  283686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 20:58:42.365488  283686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 20:58:42.368899  283686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 20:58:42.371042  283686 out.go:235]   - Booting up control plane ...
	I1011 20:58:42.371145  283686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 20:58:42.371221  283686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 20:58:42.387437  283686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 20:58:42.402593  283686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 20:58:42.408517  283686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 20:58:42.408575  283686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 20:58:42.493471  283686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 20:58:42.493591  283686 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 20:58:43.995160  283686 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501702844s
	I1011 20:58:43.995252  283686 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 20:58:49.498441  283686 kubeadm.go:310] [api-check] The API server is healthy after 5.503335569s
	I1011 20:58:49.524063  283686 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 20:58:49.538529  283686 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 20:58:49.565479  283686 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 20:58:49.565678  283686 kubeadm.go:310] [mark-control-plane] Marking the node addons-627736 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 20:58:49.577116  283686 kubeadm.go:310] [bootstrap-token] Using token: t2uypf.gy0wdc6zxqr3x4o7
	I1011 20:58:49.579785  283686 out.go:235]   - Configuring RBAC rules ...
	I1011 20:58:49.579915  283686 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 20:58:49.584510  283686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 20:58:49.595062  283686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 20:58:49.599221  283686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 20:58:49.603318  283686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 20:58:49.607650  283686 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 20:58:49.905742  283686 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 20:58:50.374620  283686 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 20:58:50.905373  283686 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 20:58:50.906627  283686 kubeadm.go:310] 
	I1011 20:58:50.906702  283686 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 20:58:50.906712  283686 kubeadm.go:310] 
	I1011 20:58:50.906788  283686 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 20:58:50.906798  283686 kubeadm.go:310] 
	I1011 20:58:50.906824  283686 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 20:58:50.906903  283686 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 20:58:50.906958  283686 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 20:58:50.906971  283686 kubeadm.go:310] 
	I1011 20:58:50.907025  283686 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 20:58:50.907033  283686 kubeadm.go:310] 
	I1011 20:58:50.907080  283686 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 20:58:50.907089  283686 kubeadm.go:310] 
	I1011 20:58:50.907141  283686 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 20:58:50.907218  283686 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 20:58:50.907290  283686 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 20:58:50.907296  283686 kubeadm.go:310] 
	I1011 20:58:50.907380  283686 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 20:58:50.907458  283686 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 20:58:50.907468  283686 kubeadm.go:310] 
	I1011 20:58:50.907550  283686 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t2uypf.gy0wdc6zxqr3x4o7 \
	I1011 20:58:50.907656  283686 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3ad57be593f5ef8d7070016b8fd5a352b0a6c8ca865fb469493e29f8ed14cb \
	I1011 20:58:50.907680  283686 kubeadm.go:310] 	--control-plane 
	I1011 20:58:50.907688  283686 kubeadm.go:310] 
	I1011 20:58:50.907771  283686 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 20:58:50.907780  283686 kubeadm.go:310] 
	I1011 20:58:50.907861  283686 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t2uypf.gy0wdc6zxqr3x4o7 \
	I1011 20:58:50.907965  283686 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3ad57be593f5ef8d7070016b8fd5a352b0a6c8ca865fb469493e29f8ed14cb 
	I1011 20:58:50.912416  283686 kubeadm.go:310] W1011 20:58:35.668720    1186 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 20:58:50.912725  283686 kubeadm.go:310] W1011 20:58:35.669516    1186 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 20:58:50.912941  283686 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1011 20:58:50.913048  283686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
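The `--discovery-token-ca-cert-hash` value in the join commands above is, per the kubeadm documentation, `sha256:` followed by the hex SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch of that hash format using dummy bytes (a real computation would first extract the SPKI from the CA cert, e.g. with the non-stdlib `cryptography` package; `dummy_spki` below stands in for the DER blob):

```python
# kubeadm discovery-token-ca-cert-hash format: "sha256:" + hex digest of
# the CA public key's DER-encoded SubjectPublicKeyInfo.
import hashlib

def ca_cert_hash(spki_der: bytes) -> str:
    return "sha256:" + hashlib.sha256(spki_der).hexdigest()

# Placeholder bytes, NOT a real SubjectPublicKeyInfo structure.
dummy_spki = b"\x30\x82\x01\x22dummy-subject-public-key-info"
h = ca_cert_hash(dummy_spki)
print(h)  # "sha256:" followed by 64 hex characters
```

Joining nodes recompute this hash from the CA cert served by the cluster and refuse to join on a mismatch, which is what makes the token-based bootstrap safe against a spoofed API server.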
	I1011 20:58:50.913068  283686 cni.go:84] Creating CNI manager for ""
	I1011 20:58:50.913076  283686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1011 20:58:50.916049  283686 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1011 20:58:50.918761  283686 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1011 20:58:50.922580  283686 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1011 20:58:50.922641  283686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1011 20:58:50.942107  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1011 20:58:51.221260  283686 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 20:58:51.221478  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-627736 minikube.k8s.io/updated_at=2024_10_11T20_58_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=addons-627736 minikube.k8s.io/primary=true
	I1011 20:58:51.221400  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:51.409948  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:51.410005  283686 ops.go:34] apiserver oom_adj: -16
	I1011 20:58:51.910236  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:52.410534  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:52.910770  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:53.410898  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:53.910062  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:54.410050  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:54.910861  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:55.410967  283686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:55.524820  283686 kubeadm.go:1113] duration metric: took 4.303477905s to wait for elevateKubeSystemPrivileges
	I1011 20:58:55.524856  283686 kubeadm.go:394] duration metric: took 20.049111377s to StartCluster
	I1011 20:58:55.524876  283686 settings.go:142] acquiring lock: {Name:mkd159174089de36fda894bd942ff4e38ae67976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:55.525008  283686 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-277533/kubeconfig
	I1011 20:58:55.525386  283686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-277533/kubeconfig: {Name:mk2d78d1d8080a1deb25ffe9f98ce4dff6104211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:55.525591  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 20:58:55.525605  283686 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 20:58:55.525840  283686 config.go:182] Loaded profile config "addons-627736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:58:55.525870  283686 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1011 20:58:55.525946  283686 addons.go:69] Setting yakd=true in profile "addons-627736"
	I1011 20:58:55.525965  283686 addons.go:234] Setting addon yakd=true in "addons-627736"
	I1011 20:58:55.525988  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.526454  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.526784  283686 addons.go:69] Setting inspektor-gadget=true in profile "addons-627736"
	I1011 20:58:55.526803  283686 addons.go:234] Setting addon inspektor-gadget=true in "addons-627736"
	I1011 20:58:55.526827  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.527307  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.527846  283686 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-627736"
	I1011 20:58:55.527869  283686 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-627736"
	I1011 20:58:55.527894  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.528290  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.530487  283686 addons.go:69] Setting cloud-spanner=true in profile "addons-627736"
	I1011 20:58:55.530524  283686 addons.go:234] Setting addon cloud-spanner=true in "addons-627736"
	I1011 20:58:55.530564  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.532022  283686 addons.go:69] Setting metrics-server=true in profile "addons-627736"
	I1011 20:58:55.532087  283686 addons.go:234] Setting addon metrics-server=true in "addons-627736"
	I1011 20:58:55.532138  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.532643  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.533592  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.539366  283686 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-627736"
	I1011 20:58:55.539413  283686 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-627736"
	I1011 20:58:55.539450  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.539927  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.541084  283686 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-627736"
	I1011 20:58:55.541175  283686 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-627736"
	I1011 20:58:55.580788  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.581304  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.541324  283686 addons.go:69] Setting default-storageclass=true in profile "addons-627736"
	I1011 20:58:55.603728  283686 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-627736"
	I1011 20:58:55.604193  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.604462  283686 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1011 20:58:55.541337  283686 addons.go:69] Setting gcp-auth=true in profile "addons-627736"
	I1011 20:58:55.618399  283686 mustload.go:65] Loading cluster: addons-627736
	I1011 20:58:55.618686  283686 config.go:182] Loaded profile config "addons-627736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:58:55.619099  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.541341  283686 addons.go:69] Setting ingress=true in profile "addons-627736"
	I1011 20:58:55.630399  283686 addons.go:234] Setting addon ingress=true in "addons-627736"
	I1011 20:58:55.630463  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.630972  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.632197  283686 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1011 20:58:55.632257  283686 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1011 20:58:55.632346  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.541345  283686 addons.go:69] Setting ingress-dns=true in profile "addons-627736"
	I1011 20:58:55.561549  283686 addons.go:69] Setting registry=true in profile "addons-627736"
	I1011 20:58:55.636286  283686 addons.go:234] Setting addon registry=true in "addons-627736"
	I1011 20:58:55.636356  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.561564  283686 addons.go:69] Setting storage-provisioner=true in profile "addons-627736"
	I1011 20:58:55.637220  283686 addons.go:234] Setting addon storage-provisioner=true in "addons-627736"
	I1011 20:58:55.637247  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.561573  283686 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-627736"
	I1011 20:58:55.647937  283686 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-627736"
	I1011 20:58:55.561577  283686 addons.go:69] Setting volcano=true in profile "addons-627736"
	I1011 20:58:55.648277  283686 addons.go:234] Setting addon volcano=true in "addons-627736"
	I1011 20:58:55.648308  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.561581  283686 addons.go:69] Setting volumesnapshots=true in profile "addons-627736"
	I1011 20:58:55.648411  283686 addons.go:234] Setting addon volumesnapshots=true in "addons-627736"
	I1011 20:58:55.648436  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.563901  283686 out.go:177] * Verifying Kubernetes components...
	I1011 20:58:55.648585  283686 addons.go:234] Setting addon ingress-dns=true in "addons-627736"
	I1011 20:58:55.648635  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.649091  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.662560  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.688730  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.694543  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.710283  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.727069  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.729985  283686 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1011 20:58:55.732621  283686 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1011 20:58:55.732655  283686 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1011 20:58:55.732725  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.746271  283686 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1011 20:58:55.746713  283686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:58:55.779014  283686 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1011 20:58:55.781665  283686 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1011 20:58:55.781688  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1011 20:58:55.781753  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.783964  283686 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1011 20:58:55.786510  283686 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1011 20:58:55.786611  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1011 20:58:55.786733  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.793911  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1011 20:58:55.800444  283686 addons.go:234] Setting addon default-storageclass=true in "addons-627736"
	I1011 20:58:55.802612  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.803116  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.808163  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.809966  283686 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 20:58:55.809985  283686 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 20:58:55.810036  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.815491  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1011 20:58:55.818196  283686 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1011 20:58:55.826636  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:55.834666  283686 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1011 20:58:55.837203  283686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1011 20:58:55.837521  283686 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1011 20:58:55.837541  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1011 20:58:55.837610  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.854671  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1011 20:58:55.866803  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1011 20:58:55.869718  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1011 20:58:55.879108  283686 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1011 20:58:55.879135  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1011 20:58:55.879205  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.901399  283686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:58:55.901691  283686 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-627736"
	I1011 20:58:55.901731  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:58:55.902180  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:58:55.911845  283686 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1011 20:58:55.914561  283686 out.go:177]   - Using image docker.io/registry:2.8.3
	I1011 20:58:55.917191  283686 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1011 20:58:55.917265  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1011 20:58:55.917361  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.920229  283686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:58:55.930770  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W1011 20:58:55.932628  283686 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1011 20:58:55.933060  283686 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1011 20:58:55.933104  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1011 20:58:55.933199  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.958384  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1011 20:58:55.961466  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1011 20:58:55.961610  283686 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 20:58:55.964459  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1011 20:58:55.968479  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1011 20:58:55.968506  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1011 20:58:55.968592  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:55.968884  283686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 20:58:55.968915  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 20:58:55.968972  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:56.005609  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.006537  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.008685  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.012733  283686 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1011 20:58:56.018921  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1011 20:58:56.018946  283686 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1011 20:58:56.019026  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:56.053320  283686 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 20:58:56.055560  283686 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 20:58:56.055716  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:56.070370  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.076337  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.114977  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.128630  283686 out.go:177]   - Using image docker.io/busybox:stable
	I1011 20:58:56.131646  283686 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1011 20:58:56.134643  283686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1011 20:58:56.134669  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1011 20:58:56.134917  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:58:56.138500  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.145032  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.145895  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.155678  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.181522  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.189512  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.196431  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:58:56.368404  283686 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1011 20:58:56.368428  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1011 20:58:56.472320  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1011 20:58:56.518938  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1011 20:58:56.531491  283686 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1011 20:58:56.531516  283686 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1011 20:58:56.563021  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1011 20:58:56.605629  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 20:58:56.608502  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1011 20:58:56.608529  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1011 20:58:56.625478  283686 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1011 20:58:56.625504  283686 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1011 20:58:56.649181  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1011 20:58:56.673072  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1011 20:58:56.725528  283686 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1011 20:58:56.725555  283686 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1011 20:58:56.728468  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1011 20:58:56.739779  283686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 20:58:56.768242  283686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1011 20:58:56.768270  283686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1011 20:58:56.783014  283686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 20:58:56.783047  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1011 20:58:56.831898  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1011 20:58:56.833511  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 20:58:56.841309  283686 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1011 20:58:56.841335  283686 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1011 20:58:56.846517  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1011 20:58:56.846545  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1011 20:58:56.856959  283686 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1011 20:58:56.856980  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1011 20:58:56.997143  283686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1011 20:58:56.997175  283686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1011 20:58:57.024146  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1011 20:58:57.024175  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1011 20:58:57.028404  283686 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1011 20:58:57.028438  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1011 20:58:57.029189  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1011 20:58:57.045531  283686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 20:58:57.045557  283686 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 20:58:57.167999  283686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1011 20:58:57.168037  283686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1011 20:58:57.193463  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1011 20:58:57.259558  283686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 20:58:57.259624  283686 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 20:58:57.270481  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1011 20:58:57.270555  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1011 20:58:57.382683  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1011 20:58:57.382760  283686 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1011 20:58:57.489884  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 20:58:57.493222  283686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1011 20:58:57.493249  283686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1011 20:58:57.593348  283686 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:58:57.593374  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1011 20:58:57.629812  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1011 20:58:57.629887  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1011 20:58:57.721382  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1011 20:58:57.721458  283686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1011 20:58:57.730696  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:58:57.814442  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1011 20:58:57.814517  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1011 20:58:57.916496  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1011 20:58:57.916578  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1011 20:58:58.041706  283686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1011 20:58:58.041783  283686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1011 20:58:58.078742  283686 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.284786574s)
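The sed pipeline above splices a `hosts` block into the CoreDNS Corefile (just before the `forward` line) and a `log` directive after `errors`, which is what produces the "host record injected" message that follows. Assuming the stock kubeadm Corefile around those lines, the patched ConfigMap ends up looking roughly like this (a sketch, not the exact file):

```
.:53 {
    errors
    log
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa { ... }
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    cache 30
}
```

The `fallthrough` directive matters here: without it, any name not listed in the `hosts` block would get NXDOMAIN instead of falling through to the `forward` plugin.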
	I1011 20:58:58.078822  283686 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1011 20:58:58.245585  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1011 20:58:59.490837  283686 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-627736" context rescaled to 1 replicas
	I1011 20:59:01.917891  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.445530604s)
	I1011 20:59:01.917957  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.398994318s)
	I1011 20:59:01.917984  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.354939823s)
	I1011 20:59:01.918031  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.312380424s)
	I1011 20:59:02.669194  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.019973991s)
	I1011 20:59:02.669230  283686 addons.go:475] Verifying addon ingress=true in "addons-627736"
	I1011 20:59:02.669426  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.996326459s)
	I1011 20:59:02.669490  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.941000968s)
	I1011 20:59:02.669666  283686 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.929864181s)
	I1011 20:59:02.670563  283686 node_ready.go:35] waiting up to 6m0s for node "addons-627736" to be "Ready" ...
	I1011 20:59:02.670755  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.838831772s)
	I1011 20:59:02.670794  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.83725768s)
	I1011 20:59:02.670931  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.641710107s)
	I1011 20:59:02.671218  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.477716719s)
	I1011 20:59:02.671240  283686 addons.go:475] Verifying addon registry=true in "addons-627736"
	I1011 20:59:02.671349  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.181398005s)
	I1011 20:59:02.671836  283686 addons.go:475] Verifying addon metrics-server=true in "addons-627736"
	I1011 20:59:02.671431  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.940660017s)
	W1011 20:59:02.671877  283686 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1011 20:59:02.671908  283686 retry.go:31] will retry after 280.043704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1011 20:59:02.673387  283686 out.go:177] * Verifying ingress addon...
	I1011 20:59:02.675402  283686 out.go:177] * Verifying registry addon...
	I1011 20:59:02.675440  283686 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-627736 service yakd-dashboard -n yakd-dashboard
	
	I1011 20:59:02.678009  283686 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1011 20:59:02.680837  283686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1011 20:59:02.696586  283686 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1011 20:59:02.696622  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:02.697137  283686 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1011 20:59:02.697156  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1011 20:59:02.712906  283686 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1011 20:59:02.952373  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:02.982563  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.736879831s)
	I1011 20:59:02.982641  283686 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-627736"
	I1011 20:59:02.985523  283686 out.go:177] * Verifying csi-hostpath-driver addon...
	I1011 20:59:02.988978  283686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1011 20:59:03.035520  283686 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1011 20:59:03.035601  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:03.183194  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:03.186324  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:03.493646  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:03.682822  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:03.684905  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:03.738021  283686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1011 20:59:03.738117  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:59:03.757442  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:59:03.861745  283686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1011 20:59:03.880123  283686 addons.go:234] Setting addon gcp-auth=true in "addons-627736"
	I1011 20:59:03.880174  283686 host.go:66] Checking if "addons-627736" exists ...
	I1011 20:59:03.880645  283686 cli_runner.go:164] Run: docker container inspect addons-627736 --format={{.State.Status}}
	I1011 20:59:03.896454  283686 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1011 20:59:03.896511  283686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-627736
	I1011 20:59:03.912993  283686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/addons-627736/id_rsa Username:docker}
	I1011 20:59:03.992803  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:04.181892  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:04.184489  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:04.492656  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:04.673923  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:04.682577  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:04.685290  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:04.992639  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:05.181990  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:05.183816  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:05.493373  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:05.635806  283686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.683361623s)
	I1011 20:59:05.635908  283686 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.739429592s)
	I1011 20:59:05.639136  283686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:59:05.641789  283686 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1011 20:59:05.644603  283686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1011 20:59:05.644626  283686 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1011 20:59:05.669566  283686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1011 20:59:05.669640  283686 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1011 20:59:05.686944  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:05.688228  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:05.689247  283686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1011 20:59:05.689294  283686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1011 20:59:05.708250  283686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1011 20:59:05.993489  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:06.198255  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:06.199287  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:06.231718  283686 addons.go:475] Verifying addon gcp-auth=true in "addons-627736"
	I1011 20:59:06.234518  283686 out.go:177] * Verifying gcp-auth addon...
	I1011 20:59:06.238068  283686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1011 20:59:06.292136  283686 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1011 20:59:06.292163  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:06.493163  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:06.674337  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:06.683778  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:06.684688  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:06.741535  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:06.993133  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:07.182285  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:07.183661  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:07.241859  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:07.493278  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:07.681967  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:07.683503  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:07.741108  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:07.992617  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:08.182085  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:08.183519  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:08.241707  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:08.493072  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:08.682542  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:08.684149  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:08.741313  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:08.992849  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:09.174731  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:09.181814  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:09.184266  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:09.241992  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:09.493320  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:09.681883  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:09.684325  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:09.742874  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:09.992743  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:10.182621  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:10.185275  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:10.241946  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:10.493250  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:10.681964  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:10.683427  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:10.741160  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:10.992612  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:11.182647  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:11.184012  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:11.241156  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:11.493024  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:11.673671  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:11.681972  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:11.683207  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:11.741597  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:11.993014  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:12.183057  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:12.184153  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:12.241733  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:12.493692  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:12.681886  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:12.684534  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:12.741454  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:12.992955  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:13.181748  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:13.184419  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:13.241592  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:13.493095  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:13.674573  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:13.682496  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:13.684939  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:13.742119  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:13.992875  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:14.181857  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:14.184329  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:14.241446  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:14.493098  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:14.682063  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:14.684638  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:14.741943  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:14.993295  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:15.181549  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:15.184251  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:15.241662  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:15.493311  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:15.682723  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:15.684343  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:15.741516  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:15.993459  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:16.174022  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:16.182445  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:16.183645  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:16.241552  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:16.492751  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:16.681750  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:16.684118  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:16.741639  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:16.992746  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:17.183629  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:17.184277  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:17.241973  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:17.492734  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:17.682865  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:17.684402  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:17.741097  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:17.995021  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:18.182167  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:18.183861  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:18.241904  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:18.493035  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:18.674273  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:18.681930  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:18.683425  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:18.741426  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:18.992961  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:19.181947  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:19.184457  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:19.241288  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:19.492705  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:19.682304  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:19.683740  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:19.741937  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:19.993325  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:20.182508  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:20.184363  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:20.241464  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:20.493136  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:20.682419  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:20.684964  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:20.741720  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:20.993256  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:21.174475  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:21.183705  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:21.185111  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:21.242124  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:21.493585  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:21.683271  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:21.684764  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:21.741651  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:21.992919  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:22.181755  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:22.183451  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:22.241282  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:22.492890  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:22.682246  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:22.684834  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:22.746715  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:22.992549  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:23.176323  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:23.182514  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:23.184304  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:23.242069  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:23.493352  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:23.682364  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:23.684730  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:23.741463  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:23.992965  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:24.181820  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:24.183750  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:24.241253  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:24.492731  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:24.682281  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:24.683693  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:24.741944  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:24.993076  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:25.182587  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:25.183739  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:25.241982  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:25.493625  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:25.674600  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:25.681772  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:25.683562  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:25.741810  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:25.993387  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:26.182590  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:26.184710  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:26.241857  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:26.492826  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:26.682276  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:26.683688  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:26.741437  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:26.993180  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:27.182312  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:27.183903  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:27.241894  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:27.493024  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:27.683256  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:27.686348  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:27.741966  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:27.993513  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:28.173949  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:28.182245  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:28.183965  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:28.242001  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:28.492227  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:28.682531  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:28.684476  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:28.741079  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:28.992564  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:29.182119  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:29.184811  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:29.241714  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:29.493700  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:29.681614  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:29.684137  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:29.742016  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:29.993085  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:30.175180  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:30.182373  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:30.184220  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:30.242043  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:30.492552  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:30.682016  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:30.683560  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:30.741923  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:30.992979  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:31.181750  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:31.184397  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:31.241582  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:31.492674  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:31.682147  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:31.684493  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:31.741675  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:31.993074  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:32.182356  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:32.183682  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:32.241422  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:32.492651  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:32.674565  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:32.681835  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:32.684365  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:32.741649  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:32.993079  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:33.181884  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:33.191418  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:33.241540  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:33.492434  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:33.682370  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:33.683679  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:33.741485  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:33.993563  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:34.183261  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:34.184460  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:34.241368  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:34.493179  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:34.681640  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:34.684029  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:34.741217  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:34.992492  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:35.174402  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:35.182410  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:35.184098  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:35.241247  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:35.492586  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:35.682516  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:35.684853  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:35.741420  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:35.993331  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:36.182161  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:36.183974  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:36.241195  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:36.492867  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:36.681849  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:36.684459  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:36.741676  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:36.992977  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:37.174448  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:37.181637  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:37.184374  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:37.241170  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:37.493287  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:37.682599  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:37.683939  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:37.742151  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:37.992345  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:38.182246  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:38.185055  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:38.241412  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:38.493510  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:38.682169  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:38.684727  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:38.741870  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:38.993291  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:39.174764  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:39.182453  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:39.184999  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:39.241070  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:39.492924  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:39.682342  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:39.683714  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:39.741552  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:39.993185  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:40.184140  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:40.184234  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:40.241666  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:40.492888  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:40.682116  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:40.683573  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:40.742044  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:40.993289  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:41.182451  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.184172  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:41.241855  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:41.493060  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:41.673640  283686 node_ready.go:53] node "addons-627736" has status "Ready":"False"
	I1011 20:59:41.681716  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.684360  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:41.740958  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:41.992711  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:42.179314  283686 node_ready.go:49] node "addons-627736" has status "Ready":"True"
	I1011 20:59:42.179399  283686 node_ready.go:38] duration metric: took 39.508803304s for node "addons-627736" to be "Ready" ...
	I1011 20:59:42.179426  283686 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 20:59:42.193131  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:42.198310  283686 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsfcm" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:42.201659  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:42.249976  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:42.596312  283686 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1011 20:59:42.596347  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:42.719651  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:42.744392  283686 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1011 20:59:42.744480  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:42.790435  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:43.019557  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:43.186990  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:43.189022  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.287030  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:43.499334  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:43.686725  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.690062  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:43.709257  283686 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsfcm" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.709285  283686 pod_ready.go:82] duration metric: took 1.510893235s for pod "coredns-7c65d6cfc9-rsfcm" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.709304  283686 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.720973  283686 pod_ready.go:93] pod "etcd-addons-627736" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.720999  283686 pod_ready.go:82] duration metric: took 11.687165ms for pod "etcd-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.721014  283686 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.726742  283686 pod_ready.go:93] pod "kube-apiserver-addons-627736" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.726766  283686 pod_ready.go:82] duration metric: took 5.744181ms for pod "kube-apiserver-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.726777  283686 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.748268  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:43.749213  283686 pod_ready.go:93] pod "kube-controller-manager-addons-627736" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.749236  283686 pod_ready.go:82] duration metric: took 22.451255ms for pod "kube-controller-manager-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.749251  283686 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p49c6" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.760664  283686 pod_ready.go:93] pod "kube-proxy-p49c6" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.760690  283686 pod_ready.go:82] duration metric: took 11.430688ms for pod "kube-proxy-p49c6" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.760703  283686 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.993833  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:44.104050  283686 pod_ready.go:93] pod "kube-scheduler-addons-627736" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:44.104118  283686 pod_ready.go:82] duration metric: took 343.406965ms for pod "kube-scheduler-addons-627736" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:44.104147  283686 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:44.183102  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:44.186555  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:44.241576  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:44.493805  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:44.683938  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:44.685898  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:44.741968  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:44.993814  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:45.183884  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:45.189316  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:45.242565  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:45.493753  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:45.682701  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:45.686227  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:45.741523  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:45.994322  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:46.110044  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:46.182616  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:46.185629  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:46.242120  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:46.497907  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:46.683818  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:46.687395  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:46.744688  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:46.994553  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:47.183214  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.185159  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:47.242239  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:47.495631  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:47.685777  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:47.687418  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.742781  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:47.994449  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:48.112132  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:48.192670  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:48.193948  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:48.242987  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:48.494555  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:48.683889  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:48.686312  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:48.742375  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:48.996072  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:49.183133  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:49.186476  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:49.241990  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:49.494583  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:49.682811  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:49.685060  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:49.742058  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:49.994247  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:50.182747  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:50.185182  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:50.241730  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:50.493658  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:50.610921  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:50.683159  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:50.685700  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:50.742312  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:50.994160  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:51.190942  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.199356  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:51.242029  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:51.494696  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:51.683743  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.685318  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:51.741837  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:51.993888  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:52.183778  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.185430  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:52.241527  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:52.494951  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:52.611283  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:52.682430  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.684406  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:52.741700  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:52.994135  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:53.183200  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:53.186432  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:53.241732  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:53.493625  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:53.684232  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:53.685973  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:53.742356  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:53.994723  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:54.183115  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:54.184937  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:54.242372  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:54.494488  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:54.683980  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:54.686343  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:54.742097  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:54.994181  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:55.111646  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:55.183128  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.186859  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:55.242160  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:55.494393  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:55.682495  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.683985  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:55.742424  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:55.994026  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:56.182440  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.185625  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:56.242459  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:56.495930  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:56.694338  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:56.697540  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.742185  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:56.995236  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:57.117222  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:57.186268  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:57.191246  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:57.242488  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:57.497470  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:57.686445  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:57.689875  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:57.742776  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:57.994343  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:58.189584  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:58.191223  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:58.245179  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:58.496072  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:58.682735  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:58.685405  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:58.742370  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:58.995920  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:59.183399  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.185421  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:59.241922  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:59.494451  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:59.610657  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:59.683093  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.685441  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:59.741726  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:59.993959  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:00.193795  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:00.200588  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:00.253340  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:00.499644  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:00.687591  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:00.689931  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:00.745097  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:00.995432  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:01.186258  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:01.186667  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:01.242372  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:01.496051  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:01.612733  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:01.690768  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:01.692834  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:01.742289  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:01.995673  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:02.186089  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:02.188213  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:02.242896  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:02.495915  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:02.684190  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:02.692008  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:02.741590  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:02.996039  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:03.196082  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:03.197968  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:03.241881  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:03.507365  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:03.615334  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:03.683208  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:03.685729  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:03.742108  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:03.993770  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:04.184732  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:04.185089  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.242393  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:04.493713  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:04.685917  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.688861  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:04.742998  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:04.995200  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:05.184958  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:05.189363  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:05.242522  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:05.494624  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:05.683544  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:05.686218  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:05.741836  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:05.994541  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:06.114812  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:06.183082  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:06.184605  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:06.241925  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:06.493597  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:06.692572  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:06.693443  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:06.785954  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:06.993595  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:07.183286  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:07.185093  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:07.241714  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:07.494154  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:07.682752  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:07.684972  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:07.742153  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:07.993857  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:08.182530  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.184733  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:08.241927  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:08.493917  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:08.624802  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:08.694364  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.695104  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:08.742802  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:08.995515  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:09.196012  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:09.201441  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:09.243086  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:09.495892  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:09.718784  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:09.720547  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:09.799122  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:09.995473  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:10.184807  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:10.187801  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:10.242580  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:10.494931  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:10.684624  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:10.687249  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:10.747744  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:10.995669  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:11.114546  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:11.182751  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:11.186321  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:11.245362  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:11.494467  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:11.683316  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:11.688864  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:11.742621  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:11.998253  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:12.182795  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:12.186205  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:12.241894  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:12.494235  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:12.683620  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:12.685612  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:12.742027  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:12.994984  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:13.183372  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:13.185346  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:13.241619  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:13.495083  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:13.614045  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:13.683339  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:13.687369  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:13.744324  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:13.997347  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:14.185094  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:14.185496  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:14.241847  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:14.497025  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:14.683090  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:14.686614  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:14.742416  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:14.994358  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:15.186258  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:15.187922  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.242542  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:15.497130  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:15.685136  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.688008  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:15.742886  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:15.994861  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:16.113630  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:16.186624  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:16.187492  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:16.242192  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:16.495067  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:16.685043  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:16.687654  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:16.742454  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:16.994440  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:17.185828  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:17.188238  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:17.241798  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:17.495215  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:17.682774  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:17.684698  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:17.741767  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:17.994817  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:18.183625  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:18.185371  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:18.241982  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:18.494791  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:18.610621  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:18.683173  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:18.685074  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:18.741300  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:18.994020  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:19.183182  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:19.185005  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:19.241399  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:19.494572  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:19.685907  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:19.689066  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:19.741805  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:19.993862  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:20.184033  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:20.186554  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:20.242159  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:20.495473  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:20.610667  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:20.683864  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:20.685604  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:20.742184  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:20.994375  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:21.182916  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:21.184822  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:21.242287  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:21.493767  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:21.682572  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:21.684572  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:21.743892  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:21.993704  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:22.184216  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:22.185387  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:22.241994  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:22.494082  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:22.613037  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:22.684065  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:22.686350  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:22.783621  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:23.000875  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:23.182730  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:23.184996  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:23.241191  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:23.495196  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:23.682923  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:23.685133  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:23.743663  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:23.993614  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:24.183979  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:24.185519  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:24.243000  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:24.493990  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:24.683195  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:24.686268  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:24.742093  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:24.995048  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:25.110628  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:25.184752  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:25.185739  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:25.242025  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:25.495015  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:25.682812  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:25.685241  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:25.742124  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:25.994902  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:26.182333  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:26.184623  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:26.241914  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:26.493365  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:26.682951  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:26.684589  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:26.741708  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:26.995401  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:27.111069  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:27.183478  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:27.186340  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:27.241434  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:27.495892  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:27.684907  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:27.686037  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:27.745184  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:27.997566  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:28.185650  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:28.191867  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:28.242293  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:28.494273  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:28.683733  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:28.685365  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:28.741970  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:28.995236  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:29.112344  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:29.183377  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:29.185146  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:29.242008  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:29.494207  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:29.682967  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:29.686063  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:29.741539  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:29.994046  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:30.184213  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:30.186672  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:30.242441  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:30.494468  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:30.683868  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:30.688984  283686 kapi.go:107] duration metric: took 1m28.008145424s to wait for kubernetes.io/minikube-addons=registry ...
	I1011 21:00:30.741386  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:30.995591  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:31.183001  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:31.242329  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:31.496152  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:31.613711  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:31.683873  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:31.743124  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:31.996789  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:32.184118  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:32.242547  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:32.494440  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:32.683833  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:32.742546  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:32.997139  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:33.184720  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:33.242611  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:33.495118  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:33.682615  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:33.742743  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:33.994658  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:34.111273  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:34.183799  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:34.247855  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:34.494352  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:34.684256  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:34.741945  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:34.994905  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:35.183388  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:35.241326  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:35.495231  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:35.683562  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:35.742086  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:35.994695  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:36.115471  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:36.184554  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:36.241872  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:36.493976  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:36.682557  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:36.742188  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:36.993881  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:37.184475  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:37.241829  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:37.494635  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:37.683549  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:37.742141  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:37.995062  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:38.183151  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:38.241804  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:38.494086  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:38.612073  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:38.684580  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:38.745810  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:38.994664  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:39.183742  283686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:39.282752  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:39.494687  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:39.683585  283686 kapi.go:107] duration metric: took 1m37.005572759s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1011 21:00:39.742538  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:39.995708  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:40.242648  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:40.495807  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:40.612427  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:40.750228  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:40.994549  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:41.298108  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:41.494448  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:41.741864  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:41.994483  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:42.241532  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:42.494675  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:42.742721  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:42.995753  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:43.111490  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:43.248237  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:43.495930  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:43.741833  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:43.994355  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:44.241138  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:44.494599  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:44.741504  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:44.995053  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:45.113671  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:45.242744  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:45.493636  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:45.741925  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:45.993727  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:46.241393  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:46.495043  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:46.742401  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:46.995324  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:47.242042  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:47.494424  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:47.611133  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:47.741713  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:47.996051  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:48.241734  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:48.493977  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:48.742221  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:48.994375  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:49.241791  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:49.494375  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:49.742366  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:49.996759  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:50.110838  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:50.246769  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:50.494639  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:50.741877  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:50.994681  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:51.244654  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:51.494290  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:51.752438  283686 kapi.go:107] duration metric: took 1m45.514367168s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1011 21:00:51.755643  283686 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-627736 cluster.
	I1011 21:00:51.759261  283686 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1011 21:00:51.760775  283686 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1011 21:00:51.994653  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:52.501765  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:52.622009  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:52.994761  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:53.494925  283686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:53.994877  283686 kapi.go:107] duration metric: took 1m51.005901127s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1011 21:00:53.996214  283686 out.go:177] * Enabled addons: inspektor-gadget, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1011 21:00:53.997472  283686 addons.go:510] duration metric: took 1m58.471592636s for enable addons: enabled=[inspektor-gadget amd-gpu-device-plugin nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1011 21:00:55.110733  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:57.610448  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:00:59.610618  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:01:01.611417  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:01:03.615797  283686 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"False"
	I1011 21:01:05.610009  283686 pod_ready.go:93] pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace has status "Ready":"True"
	I1011 21:01:05.610036  283686 pod_ready.go:82] duration metric: took 1m21.505867715s for pod "metrics-server-84c5f94fbc-96mlh" in "kube-system" namespace to be "Ready" ...
	I1011 21:01:05.610053  283686 pod_ready.go:39] duration metric: took 1m23.430586026s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:01:05.610069  283686 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:01:05.610104  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 21:01:05.610169  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 21:01:05.670588  283686 cri.go:89] found id: "98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:05.670627  283686 cri.go:89] found id: ""
	I1011 21:01:05.670636  283686 logs.go:282] 1 containers: [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78]
	I1011 21:01:05.670702  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.674259  283686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 21:01:05.674337  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 21:01:05.716284  283686 cri.go:89] found id: "b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:05.716307  283686 cri.go:89] found id: ""
	I1011 21:01:05.716315  283686 logs.go:282] 1 containers: [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b]
	I1011 21:01:05.716372  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.719810  283686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 21:01:05.719937  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 21:01:05.773858  283686 cri.go:89] found id: "4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:05.773882  283686 cri.go:89] found id: ""
	I1011 21:01:05.773891  283686 logs.go:282] 1 containers: [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab]
	I1011 21:01:05.773961  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.777260  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 21:01:05.777398  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 21:01:05.816412  283686 cri.go:89] found id: "52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:05.816491  283686 cri.go:89] found id: ""
	I1011 21:01:05.816515  283686 logs.go:282] 1 containers: [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015]
	I1011 21:01:05.816610  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.820235  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 21:01:05.820306  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 21:01:05.860720  283686 cri.go:89] found id: "b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:05.860743  283686 cri.go:89] found id: ""
	I1011 21:01:05.860752  283686 logs.go:282] 1 containers: [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e]
	I1011 21:01:05.860809  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.864766  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 21:01:05.864873  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 21:01:05.905768  283686 cri.go:89] found id: "44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:05.905801  283686 cri.go:89] found id: ""
	I1011 21:01:05.905811  283686 logs.go:282] 1 containers: [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432]
	I1011 21:01:05.905877  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.909382  283686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 21:01:05.909464  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 21:01:05.952306  283686 cri.go:89] found id: "aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:05.952329  283686 cri.go:89] found id: ""
	I1011 21:01:05.952337  283686 logs.go:282] 1 containers: [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1]
	I1011 21:01:05.952433  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:05.955834  283686 logs.go:123] Gathering logs for coredns [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab] ...
	I1011 21:01:05.955862  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:06.001543  283686 logs.go:123] Gathering logs for kube-scheduler [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015] ...
	I1011 21:01:06.001575  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:06.054417  283686 logs.go:123] Gathering logs for kube-controller-manager [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432] ...
	I1011 21:01:06.054448  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:06.192875  283686 logs.go:123] Gathering logs for container status ...
	I1011 21:01:06.192916  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 21:01:06.261533  283686 logs.go:123] Gathering logs for kubelet ...
	I1011 21:01:06.261575  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 21:01:06.313314  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:06.313591  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:06.313771  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:06.313990  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:06.375037  283686 logs.go:123] Gathering logs for dmesg ...
	I1011 21:01:06.375070  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 21:01:06.393698  283686 logs.go:123] Gathering logs for describe nodes ...
	I1011 21:01:06.393732  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 21:01:06.574353  283686 logs.go:123] Gathering logs for etcd [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b] ...
	I1011 21:01:06.574384  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:06.635302  283686 logs.go:123] Gathering logs for kube-apiserver [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78] ...
	I1011 21:01:06.635332  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:06.689059  283686 logs.go:123] Gathering logs for kube-proxy [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e] ...
	I1011 21:01:06.689093  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:06.731413  283686 logs.go:123] Gathering logs for kindnet [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1] ...
	I1011 21:01:06.731442  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:06.772473  283686 logs.go:123] Gathering logs for CRI-O ...
	I1011 21:01:06.772503  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 21:01:06.864616  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:06.864650  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 21:01:06.864735  283686 out.go:270] X Problems detected in kubelet:
	W1011 21:01:06.864747  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:06.864761  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:06.864781  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:06.864795  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:06.864813  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:06.864822  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:01:16.865634  283686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:01:16.879583  283686 api_server.go:72] duration metric: took 2m21.353946238s to wait for apiserver process to appear ...
	I1011 21:01:16.879609  283686 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:01:16.879645  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 21:01:16.879702  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 21:01:16.919894  283686 cri.go:89] found id: "98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:16.919915  283686 cri.go:89] found id: ""
	I1011 21:01:16.919925  283686 logs.go:282] 1 containers: [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78]
	I1011 21:01:16.919985  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:16.923744  283686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 21:01:16.923827  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 21:01:16.963028  283686 cri.go:89] found id: "b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:16.963055  283686 cri.go:89] found id: ""
	I1011 21:01:16.963065  283686 logs.go:282] 1 containers: [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b]
	I1011 21:01:16.963123  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:16.966723  283686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 21:01:16.966796  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 21:01:17.015407  283686 cri.go:89] found id: "4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:17.015432  283686 cri.go:89] found id: ""
	I1011 21:01:17.015451  283686 logs.go:282] 1 containers: [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab]
	I1011 21:01:17.015513  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:17.018891  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 21:01:17.018963  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 21:01:17.060477  283686 cri.go:89] found id: "52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:17.060497  283686 cri.go:89] found id: ""
	I1011 21:01:17.060506  283686 logs.go:282] 1 containers: [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015]
	I1011 21:01:17.060562  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:17.064163  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 21:01:17.064243  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 21:01:17.103509  283686 cri.go:89] found id: "b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:17.103533  283686 cri.go:89] found id: ""
	I1011 21:01:17.103543  283686 logs.go:282] 1 containers: [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e]
	I1011 21:01:17.103606  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:17.107091  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 21:01:17.107159  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 21:01:17.146822  283686 cri.go:89] found id: "44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:17.146884  283686 cri.go:89] found id: ""
	I1011 21:01:17.146893  283686 logs.go:282] 1 containers: [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432]
	I1011 21:01:17.146958  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:17.150695  283686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 21:01:17.150775  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 21:01:17.196771  283686 cri.go:89] found id: "aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:17.196795  283686 cri.go:89] found id: ""
	I1011 21:01:17.196804  283686 logs.go:282] 1 containers: [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1]
	I1011 21:01:17.196857  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:17.200447  283686 logs.go:123] Gathering logs for describe nodes ...
	I1011 21:01:17.200484  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 21:01:17.335296  283686 logs.go:123] Gathering logs for kube-apiserver [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78] ...
	I1011 21:01:17.335328  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:17.404435  283686 logs.go:123] Gathering logs for coredns [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab] ...
	I1011 21:01:17.404470  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:17.445029  283686 logs.go:123] Gathering logs for kube-scheduler [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015] ...
	I1011 21:01:17.445060  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:17.493920  283686 logs.go:123] Gathering logs for kube-proxy [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e] ...
	I1011 21:01:17.493951  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:17.535275  283686 logs.go:123] Gathering logs for kube-controller-manager [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432] ...
	I1011 21:01:17.535304  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:17.633273  283686 logs.go:123] Gathering logs for CRI-O ...
	I1011 21:01:17.633309  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 21:01:17.734290  283686 logs.go:123] Gathering logs for kubelet ...
	I1011 21:01:17.734331  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 21:01:17.791643  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:17.791917  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:17.792100  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:17.792319  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:17.854309  283686 logs.go:123] Gathering logs for etcd [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b] ...
	I1011 21:01:17.854352  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:17.911517  283686 logs.go:123] Gathering logs for kindnet [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1] ...
	I1011 21:01:17.911548  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:17.955470  283686 logs.go:123] Gathering logs for container status ...
	I1011 21:01:17.955501  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 21:01:18.019275  283686 logs.go:123] Gathering logs for dmesg ...
	I1011 21:01:18.019359  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 21:01:18.036766  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:18.036801  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 21:01:18.036901  283686 out.go:270] X Problems detected in kubelet:
	W1011 21:01:18.036919  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:18.036956  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:18.036983  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:18.036993  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:18.037003  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:18.037024  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:01:28.038301  283686 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1011 21:01:28.046207  283686 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1011 21:01:28.047432  283686 api_server.go:141] control plane version: v1.31.1
	I1011 21:01:28.047459  283686 api_server.go:131] duration metric: took 11.167842186s to wait for apiserver health ...
	I1011 21:01:28.047468  283686 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 21:01:28.047490  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 21:01:28.047555  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 21:01:28.101765  283686 cri.go:89] found id: "98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:28.101789  283686 cri.go:89] found id: ""
	I1011 21:01:28.101798  283686 logs.go:282] 1 containers: [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78]
	I1011 21:01:28.101857  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.105239  283686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 21:01:28.105317  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 21:01:28.149350  283686 cri.go:89] found id: "b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:28.149373  283686 cri.go:89] found id: ""
	I1011 21:01:28.149382  283686 logs.go:282] 1 containers: [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b]
	I1011 21:01:28.149441  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.153152  283686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 21:01:28.153227  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 21:01:28.192693  283686 cri.go:89] found id: "4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:28.192717  283686 cri.go:89] found id: ""
	I1011 21:01:28.192725  283686 logs.go:282] 1 containers: [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab]
	I1011 21:01:28.192785  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.196264  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 21:01:28.196332  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 21:01:28.234414  283686 cri.go:89] found id: "52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:28.234434  283686 cri.go:89] found id: ""
	I1011 21:01:28.234443  283686 logs.go:282] 1 containers: [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015]
	I1011 21:01:28.234497  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.237874  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 21:01:28.237995  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 21:01:28.277109  283686 cri.go:89] found id: "b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:28.277134  283686 cri.go:89] found id: ""
	I1011 21:01:28.277143  283686 logs.go:282] 1 containers: [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e]
	I1011 21:01:28.277199  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.280762  283686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 21:01:28.280850  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 21:01:28.321646  283686 cri.go:89] found id: "44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:28.321671  283686 cri.go:89] found id: ""
	I1011 21:01:28.321680  283686 logs.go:282] 1 containers: [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432]
	I1011 21:01:28.321742  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.325266  283686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 21:01:28.325364  283686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 21:01:28.364739  283686 cri.go:89] found id: "aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:28.364763  283686 cri.go:89] found id: ""
	I1011 21:01:28.364772  283686 logs.go:282] 1 containers: [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1]
	I1011 21:01:28.364831  283686 ssh_runner.go:195] Run: which crictl
	I1011 21:01:28.368342  283686 logs.go:123] Gathering logs for kindnet [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1] ...
	I1011 21:01:28.368368  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1"
	I1011 21:01:28.410471  283686 logs.go:123] Gathering logs for kubelet ...
	I1011 21:01:28.410498  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 21:01:28.462321  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:28.462562  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:28.462740  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:28.462965  283686 logs.go:138] Found kubelet problem: Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:28.525427  283686 logs.go:123] Gathering logs for dmesg ...
	I1011 21:01:28.525457  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 21:01:28.543659  283686 logs.go:123] Gathering logs for describe nodes ...
	I1011 21:01:28.543693  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 21:01:28.700375  283686 logs.go:123] Gathering logs for etcd [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b] ...
	I1011 21:01:28.700591  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b"
	I1011 21:01:28.789188  283686 logs.go:123] Gathering logs for kube-controller-manager [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432] ...
	I1011 21:01:28.789222  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432"
	I1011 21:01:28.861006  283686 logs.go:123] Gathering logs for container status ...
	I1011 21:01:28.861044  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 21:01:28.917375  283686 logs.go:123] Gathering logs for kube-apiserver [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78] ...
	I1011 21:01:28.917411  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78"
	I1011 21:01:28.972973  283686 logs.go:123] Gathering logs for coredns [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab] ...
	I1011 21:01:28.973008  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab"
	I1011 21:01:29.020737  283686 logs.go:123] Gathering logs for kube-scheduler [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015] ...
	I1011 21:01:29.020769  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015"
	I1011 21:01:29.067399  283686 logs.go:123] Gathering logs for kube-proxy [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e] ...
	I1011 21:01:29.067433  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e"
	I1011 21:01:29.109861  283686 logs.go:123] Gathering logs for CRI-O ...
	I1011 21:01:29.109889  283686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 21:01:29.200877  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:29.200911  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 21:01:29.200992  283686 out.go:270] X Problems detected in kubelet:
	W1011 21:01:29.201006  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376858    1500 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:29.201022  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.376915    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	W1011 21:01:29.201044  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: W1011 20:58:55.376988    1500 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-627736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-627736' and this object
	W1011 21:01:29.201057  283686 out.go:270]   Oct 11 20:58:55 addons-627736 kubelet[1500]: E1011 20:58:55.377003    1500 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-627736\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-627736' and this object" logger="UnhandledError"
	I1011 21:01:29.201065  283686 out.go:358] Setting ErrFile to fd 2...
	I1011 21:01:29.201084  283686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:01:39.213529  283686 system_pods.go:59] 18 kube-system pods found
	I1011 21:01:39.213570  283686 system_pods.go:61] "coredns-7c65d6cfc9-rsfcm" [996e1047-8f10-483c-b830-62ec9c4b730f] Running
	I1011 21:01:39.213577  283686 system_pods.go:61] "csi-hostpath-attacher-0" [87ab25f5-e4b7-4fca-9c15-c48d19c12b6b] Running
	I1011 21:01:39.213582  283686 system_pods.go:61] "csi-hostpath-resizer-0" [a69f5a66-0e63-4da4-a58d-9b45ff6cea64] Running
	I1011 21:01:39.213612  283686 system_pods.go:61] "csi-hostpathplugin-62fx7" [86c47021-238a-4871-ac72-f78324ed2dd6] Running
	I1011 21:01:39.213625  283686 system_pods.go:61] "etcd-addons-627736" [827639aa-3bdc-40ac-aa45-a6fea950ca93] Running
	I1011 21:01:39.213631  283686 system_pods.go:61] "kindnet-dl4r6" [062ac268-a384-40a2-a21f-958b9a3a66b1] Running
	I1011 21:01:39.213635  283686 system_pods.go:61] "kube-apiserver-addons-627736" [995afcf6-521b-49ba-a610-46c76edc3841] Running
	I1011 21:01:39.213644  283686 system_pods.go:61] "kube-controller-manager-addons-627736" [8a9b26d8-92ce-4ff1-930e-b4b9d34f5b9c] Running
	I1011 21:01:39.213648  283686 system_pods.go:61] "kube-ingress-dns-minikube" [9ee3781e-ba5e-4b03-a5f5-cc32cc20407b] Running
	I1011 21:01:39.213651  283686 system_pods.go:61] "kube-proxy-p49c6" [995ebad4-48a5-48d5-a2aa-aef4671e5f5f] Running
	I1011 21:01:39.213655  283686 system_pods.go:61] "kube-scheduler-addons-627736" [2878cfca-4eed-4105-8c19-850954387751] Running
	I1011 21:01:39.213659  283686 system_pods.go:61] "metrics-server-84c5f94fbc-96mlh" [6cae95da-c64a-42fb-a86c-a65aa4fa0447] Running
	I1011 21:01:39.213666  283686 system_pods.go:61] "nvidia-device-plugin-daemonset-p9nsd" [41af943b-e0c9-4974-aa28-297cadfc3d28] Running
	I1011 21:01:39.213670  283686 system_pods.go:61] "registry-66c9cd494c-p6l9v" [0674412c-ee63-4347-b013-fcbb85bd1f6a] Running
	I1011 21:01:39.213695  283686 system_pods.go:61] "registry-proxy-hxsb7" [9f05d6fb-3f2f-4840-a6f5-392af1bf7e10] Running
	I1011 21:01:39.213705  283686 system_pods.go:61] "snapshot-controller-56fcc65765-5ldbm" [2c8ed9f6-cfa8-44fc-aa89-06743412532e] Running
	I1011 21:01:39.213709  283686 system_pods.go:61] "snapshot-controller-56fcc65765-df6h6" [14e51f4a-4153-44fa-a8e0-9db0a24b48d7] Running
	I1011 21:01:39.213713  283686 system_pods.go:61] "storage-provisioner" [f1e91d7e-5124-4e47-9e2f-6ef18efad060] Running
	I1011 21:01:39.213725  283686 system_pods.go:74] duration metric: took 11.166248362s to wait for pod list to return data ...
	I1011 21:01:39.213737  283686 default_sa.go:34] waiting for default service account to be created ...
	I1011 21:01:39.216674  283686 default_sa.go:45] found service account: "default"
	I1011 21:01:39.216702  283686 default_sa.go:55] duration metric: took 2.958981ms for default service account to be created ...
	I1011 21:01:39.216713  283686 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 21:01:39.228323  283686 system_pods.go:86] 18 kube-system pods found
	I1011 21:01:39.228422  283686 system_pods.go:89] "coredns-7c65d6cfc9-rsfcm" [996e1047-8f10-483c-b830-62ec9c4b730f] Running
	I1011 21:01:39.228447  283686 system_pods.go:89] "csi-hostpath-attacher-0" [87ab25f5-e4b7-4fca-9c15-c48d19c12b6b] Running
	I1011 21:01:39.228470  283686 system_pods.go:89] "csi-hostpath-resizer-0" [a69f5a66-0e63-4da4-a58d-9b45ff6cea64] Running
	I1011 21:01:39.228505  283686 system_pods.go:89] "csi-hostpathplugin-62fx7" [86c47021-238a-4871-ac72-f78324ed2dd6] Running
	I1011 21:01:39.228551  283686 system_pods.go:89] "etcd-addons-627736" [827639aa-3bdc-40ac-aa45-a6fea950ca93] Running
	I1011 21:01:39.228575  283686 system_pods.go:89] "kindnet-dl4r6" [062ac268-a384-40a2-a21f-958b9a3a66b1] Running
	I1011 21:01:39.228604  283686 system_pods.go:89] "kube-apiserver-addons-627736" [995afcf6-521b-49ba-a610-46c76edc3841] Running
	I1011 21:01:39.228639  283686 system_pods.go:89] "kube-controller-manager-addons-627736" [8a9b26d8-92ce-4ff1-930e-b4b9d34f5b9c] Running
	I1011 21:01:39.228666  283686 system_pods.go:89] "kube-ingress-dns-minikube" [9ee3781e-ba5e-4b03-a5f5-cc32cc20407b] Running
	I1011 21:01:39.228686  283686 system_pods.go:89] "kube-proxy-p49c6" [995ebad4-48a5-48d5-a2aa-aef4671e5f5f] Running
	I1011 21:01:39.228716  283686 system_pods.go:89] "kube-scheduler-addons-627736" [2878cfca-4eed-4105-8c19-850954387751] Running
	I1011 21:01:39.228741  283686 system_pods.go:89] "metrics-server-84c5f94fbc-96mlh" [6cae95da-c64a-42fb-a86c-a65aa4fa0447] Running
	I1011 21:01:39.228764  283686 system_pods.go:89] "nvidia-device-plugin-daemonset-p9nsd" [41af943b-e0c9-4974-aa28-297cadfc3d28] Running
	I1011 21:01:39.228790  283686 system_pods.go:89] "registry-66c9cd494c-p6l9v" [0674412c-ee63-4347-b013-fcbb85bd1f6a] Running
	I1011 21:01:39.228820  283686 system_pods.go:89] "registry-proxy-hxsb7" [9f05d6fb-3f2f-4840-a6f5-392af1bf7e10] Running
	I1011 21:01:39.228848  283686 system_pods.go:89] "snapshot-controller-56fcc65765-5ldbm" [2c8ed9f6-cfa8-44fc-aa89-06743412532e] Running
	I1011 21:01:39.228871  283686 system_pods.go:89] "snapshot-controller-56fcc65765-df6h6" [14e51f4a-4153-44fa-a8e0-9db0a24b48d7] Running
	I1011 21:01:39.228897  283686 system_pods.go:89] "storage-provisioner" [f1e91d7e-5124-4e47-9e2f-6ef18efad060] Running
	I1011 21:01:39.228934  283686 system_pods.go:126] duration metric: took 12.213856ms to wait for k8s-apps to be running ...
	I1011 21:01:39.228970  283686 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 21:01:39.229073  283686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:01:39.244661  283686 system_svc.go:56] duration metric: took 15.681935ms WaitForService to wait for kubelet
	I1011 21:01:39.244734  283686 kubeadm.go:582] duration metric: took 2m43.719104575s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:01:39.244761  283686 node_conditions.go:102] verifying NodePressure condition ...
	I1011 21:01:39.248173  283686 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1011 21:01:39.248208  283686 node_conditions.go:123] node cpu capacity is 2
	I1011 21:01:39.248221  283686 node_conditions.go:105] duration metric: took 3.453827ms to run NodePressure ...
	I1011 21:01:39.248231  283686 start.go:241] waiting for startup goroutines ...
	I1011 21:01:39.248273  283686 start.go:246] waiting for cluster config update ...
	I1011 21:01:39.248300  283686 start.go:255] writing updated cluster config ...
	I1011 21:01:39.248636  283686 ssh_runner.go:195] Run: rm -f paused
	I1011 21:01:39.648489  283686 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 21:01:39.651504  283686 out.go:177] * Done! kubectl is now configured to use "addons-627736" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 11 21:06:05 addons-627736 crio[963]: time="2024-10-11 21:06:05.583956878Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5f85ff4588-c95cv Namespace:ingress-nginx ID:8400a8cff258d7b44f06c06c8c24aa5e2fb71bde3db8783ecfa2fb5636f88526 UID:5d0f4139-0b87-483c-bf4d-cfcd8c115bc0 NetNS:/var/run/netns/f2dec241-2686-4c08-a105-b214c0e73671 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 11 21:06:05 addons-627736 crio[963]: time="2024-10-11 21:06:05.584102269Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5f85ff4588-c95cv from CNI network \"kindnet\" (type=ptp)"
	Oct 11 21:06:05 addons-627736 crio[963]: time="2024-10-11 21:06:05.612473236Z" level=info msg="Stopped pod sandbox: 8400a8cff258d7b44f06c06c8c24aa5e2fb71bde3db8783ecfa2fb5636f88526" id=2cb30068-0b0b-462a-8eee-735e6b555f8f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 11 21:06:05 addons-627736 crio[963]: time="2024-10-11 21:06:05.753683821Z" level=info msg="Removing container: 7b4b2ad554bf6cf766e5b8fc6c1220d4717fe40708d12293ece9b69fe4df411c" id=73e077b1-9fb8-4c1e-9562-d6cc43d09793 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 11 21:06:05 addons-627736 crio[963]: time="2024-10-11 21:06:05.771042795Z" level=info msg="Removed container 7b4b2ad554bf6cf766e5b8fc6c1220d4717fe40708d12293ece9b69fe4df411c: ingress-nginx/ingress-nginx-controller-5f85ff4588-c95cv/controller" id=73e077b1-9fb8-4c1e-9562-d6cc43d09793 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.757904487Z" level=info msg="Removing container: 88f6f9dcbf080410b52c596f970b87ca8a74ffc95d4c1773f1fbd6bf2c069fed" id=a3d56c96-cf3c-47fd-a718-7aceae547a0d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.783795426Z" level=info msg="Removed container 88f6f9dcbf080410b52c596f970b87ca8a74ffc95d4c1773f1fbd6bf2c069fed: ingress-nginx/ingress-nginx-admission-patch-h4f2j/patch" id=a3d56c96-cf3c-47fd-a718-7aceae547a0d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.785196240Z" level=info msg="Removing container: 64b8cb7ba62e1d9a623376b25dea1163b2fa7325f102b0f1fc41eba569a4fee2" id=656f3c74-8c29-4f35-8cc3-9927f4e71c62 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.810257114Z" level=info msg="Removed container 64b8cb7ba62e1d9a623376b25dea1163b2fa7325f102b0f1fc41eba569a4fee2: ingress-nginx/ingress-nginx-admission-create-vrswx/create" id=656f3c74-8c29-4f35-8cc3-9927f4e71c62 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.811672451Z" level=info msg="Stopping pod sandbox: 3923ca91694766b20deb062ca1988b8e721813434a1e7f5b47eeb355bbbdc244" id=69f0fc83-3911-4ed5-b8dd-247eef8db999 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.811713779Z" level=info msg="Stopped pod sandbox (already stopped): 3923ca91694766b20deb062ca1988b8e721813434a1e7f5b47eeb355bbbdc244" id=69f0fc83-3911-4ed5-b8dd-247eef8db999 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.812030103Z" level=info msg="Removing pod sandbox: 3923ca91694766b20deb062ca1988b8e721813434a1e7f5b47eeb355bbbdc244" id=1da0e5fb-7877-4961-b395-1c6505b0ed11 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.822421493Z" level=info msg="Removed pod sandbox: 3923ca91694766b20deb062ca1988b8e721813434a1e7f5b47eeb355bbbdc244" id=1da0e5fb-7877-4961-b395-1c6505b0ed11 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.822942029Z" level=info msg="Stopping pod sandbox: 8400a8cff258d7b44f06c06c8c24aa5e2fb71bde3db8783ecfa2fb5636f88526" id=8c441b3d-683d-441b-8ae2-5470d8329f0e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.822977490Z" level=info msg="Stopped pod sandbox (already stopped): 8400a8cff258d7b44f06c06c8c24aa5e2fb71bde3db8783ecfa2fb5636f88526" id=8c441b3d-683d-441b-8ae2-5470d8329f0e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.823279652Z" level=info msg="Removing pod sandbox: 8400a8cff258d7b44f06c06c8c24aa5e2fb71bde3db8783ecfa2fb5636f88526" id=2a35914c-a982-4824-8630-8bba9c4ece3e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.832045209Z" level=info msg="Removed pod sandbox: 8400a8cff258d7b44f06c06c8c24aa5e2fb71bde3db8783ecfa2fb5636f88526" id=2a35914c-a982-4824-8630-8bba9c4ece3e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.832552715Z" level=info msg="Stopping pod sandbox: 270e8b82f28f61c8e771e379840be79ec89d969b8b81034abd12ed040ec3f0c0" id=255c8393-ec26-4eaa-8772-f036f180f64c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.832589883Z" level=info msg="Stopped pod sandbox (already stopped): 270e8b82f28f61c8e771e379840be79ec89d969b8b81034abd12ed040ec3f0c0" id=255c8393-ec26-4eaa-8772-f036f180f64c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.833077484Z" level=info msg="Removing pod sandbox: 270e8b82f28f61c8e771e379840be79ec89d969b8b81034abd12ed040ec3f0c0" id=3f7a59d8-5085-4d54-9bb5-af635cedc54a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.842269597Z" level=info msg="Removed pod sandbox: 270e8b82f28f61c8e771e379840be79ec89d969b8b81034abd12ed040ec3f0c0" id=3f7a59d8-5085-4d54-9bb5-af635cedc54a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.842978616Z" level=info msg="Stopping pod sandbox: dad0d33829e898aa9c25ce1a95ca49a817dc08dcf4de543efe68cca1a25fd1c9" id=19d1ef9e-141b-45b3-a917-df9a55227792 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.843072086Z" level=info msg="Stopped pod sandbox (already stopped): dad0d33829e898aa9c25ce1a95ca49a817dc08dcf4de543efe68cca1a25fd1c9" id=19d1ef9e-141b-45b3-a917-df9a55227792 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.843301323Z" level=info msg="Removing pod sandbox: dad0d33829e898aa9c25ce1a95ca49a817dc08dcf4de543efe68cca1a25fd1c9" id=509d70c4-cd98-446d-a95c-38fe5461dac1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 11 21:06:50 addons-627736 crio[963]: time="2024-10-11 21:06:50.853954220Z" level=info msg="Removed pod sandbox: dad0d33829e898aa9c25ce1a95ca49a817dc08dcf4de543efe68cca1a25fd1c9" id=509d70c4-cd98-446d-a95c-38fe5461dac1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16121ee7e05b2       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   00fd4929d9503       hello-world-app-55bf9c44b4-w257g
	c3aec7d30a326       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago       Running             nginx                     0                   135524c97e22c       nginx
	947c9d648a4a3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   a6609a9a15c6c       busybox
	690907b416de7       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        8 minutes ago       Running             local-path-provisioner    0                   08b2565019574       local-path-provisioner-86d989889c-nhfdh
	45f31afe6d34c       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   8 minutes ago       Running             metrics-server            0                   4ac405f0f5e95       metrics-server-84c5f94fbc-96mlh
	381fa28b97303       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   b17ecc6108a50       storage-provisioner
	4cc9120dd28ec       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        8 minutes ago       Running             coredns                   0                   49e9bd477edf2       coredns-7c65d6cfc9-rsfcm
	aefe62e0ae416       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387                      9 minutes ago       Running             kindnet-cni               0                   aad151751b198       kindnet-dl4r6
	b1b0f6640b0b2       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        9 minutes ago       Running             kube-proxy                0                   c8a95826fbea3       kube-proxy-p49c6
	44786c037f505       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        9 minutes ago       Running             kube-controller-manager   0                   27d89d1299f3e       kube-controller-manager-addons-627736
	b1eae13f5a89d       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        9 minutes ago       Running             etcd                      0                   25a7fa043c3e7       etcd-addons-627736
	52a847d70739c       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        9 minutes ago       Running             kube-scheduler            0                   3889f77f1c862       kube-scheduler-addons-627736
	98ba21f18fbc7       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        9 minutes ago       Running             kube-apiserver            0                   734c3441632e0       kube-apiserver-addons-627736
	
	
	==> coredns [4cc9120dd28ec0dcf6a14f15344633b2bfefeb64b08d9aa4e7046e83a4954fab] <==
	[INFO] 10.244.0.20:52131 - 27025 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061848s
	[INFO] 10.244.0.20:56755 - 46167 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003745881s
	[INFO] 10.244.0.20:52131 - 19807 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003502975s
	[INFO] 10.244.0.20:56755 - 49445 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002352492s
	[INFO] 10.244.0.20:52131 - 45563 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002624558s
	[INFO] 10.244.0.20:52131 - 51984 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000212696s
	[INFO] 10.244.0.20:56755 - 33612 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000080785s
	[INFO] 10.244.0.20:55420 - 51361 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000142101s
	[INFO] 10.244.0.20:55420 - 10910 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070661s
	[INFO] 10.244.0.20:59899 - 61271 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037677s
	[INFO] 10.244.0.20:55420 - 13476 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047916s
	[INFO] 10.244.0.20:59899 - 47324 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000135258s
	[INFO] 10.244.0.20:55420 - 58517 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075542s
	[INFO] 10.244.0.20:55420 - 45950 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064761s
	[INFO] 10.244.0.20:55420 - 14335 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062251s
	[INFO] 10.244.0.20:59899 - 53919 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076035s
	[INFO] 10.244.0.20:59899 - 54469 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078086s
	[INFO] 10.244.0.20:55420 - 64804 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002085653s
	[INFO] 10.244.0.20:59899 - 52316 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000125831s
	[INFO] 10.244.0.20:59899 - 52143 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062669s
	[INFO] 10.244.0.20:55420 - 18378 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001961315s
	[INFO] 10.244.0.20:59899 - 37871 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001812832s
	[INFO] 10.244.0.20:55420 - 56744 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000150092s
	[INFO] 10.244.0.20:59899 - 40523 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00149385s
	[INFO] 10.244.0.20:59899 - 56612 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000088384s
	
	
	==> describe nodes <==
	Name:               addons-627736
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-627736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=addons-627736
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T20_58_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-627736
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 20:58:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-627736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:08:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:06:28 +0000   Fri, 11 Oct 2024 20:58:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:06:28 +0000   Fri, 11 Oct 2024 20:58:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:06:28 +0000   Fri, 11 Oct 2024 20:58:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:06:28 +0000   Fri, 11 Oct 2024 20:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-627736
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 11b52b7913f542f1a28c3241c35ea74a
	  System UUID:                9b6d6844-2b7e-4842-b3ee-0008fd8800bf
	  Boot ID:                    cbc008aa-cc36-43a1-a971-3215ed2e69cb
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  default                     hello-world-app-55bf9c44b4-w257g           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 coredns-7c65d6cfc9-rsfcm                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m29s
	  kube-system                 etcd-addons-627736                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m34s
	  kube-system                 kindnet-dl4r6                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m29s
	  kube-system                 kube-apiserver-addons-627736               250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 kube-controller-manager-addons-627736      200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 kube-proxy-p49c6                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-scheduler-addons-627736               100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 metrics-server-84c5f94fbc-96mlh            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         9m23s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  local-path-storage          local-path-provisioner-86d989889c-nhfdh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m22s                  kube-proxy       
	  Normal   Starting                 9m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m41s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m41s (x8 over 9m41s)  kubelet          Node addons-627736 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m41s (x8 over 9m41s)  kubelet          Node addons-627736 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m41s (x7 over 9m41s)  kubelet          Node addons-627736 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m34s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m34s                  kubelet          Node addons-627736 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m34s                  kubelet          Node addons-627736 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m34s                  kubelet          Node addons-627736 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m30s                  node-controller  Node addons-627736 event: Registered Node addons-627736 in Controller
	  Normal   NodeReady                8m42s                  kubelet          Node addons-627736 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct11 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015629] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.448894] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.049457] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.016122] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.649193] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.194454] kauditd_printk_skb: 34 callbacks suppressed
	[Oct11 19:26] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct11 19:59] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.264105] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [b1eae13f5a89d69ae6956f2f838199493c3d8512271d75bd951f43c94a02962b] <==
	{"level":"info","ts":"2024-10-11T20:58:59.202522Z","caller":"traceutil/trace.go:171","msg":"trace[1249837494] linearizableReadLoop","detail":"{readStateIndex:407; appliedIndex:404; }","duration":"164.586849ms","start":"2024-10-11T20:58:59.037806Z","end":"2024-10-11T20:58:59.202393Z","steps":["trace[1249837494] 'read index received'  (duration: 282.905µs)","trace[1249837494] 'applied index is now lower than readState.Index'  (duration: 164.303394ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-11T20:58:59.205711Z","caller":"traceutil/trace.go:171","msg":"trace[673274089] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"384.526256ms","start":"2024-10-11T20:58:58.820321Z","end":"2024-10-11T20:58:59.204848Z","steps":["trace[673274089] 'process raft request'  (duration: 299.565359ms)","trace[673274089] 'marshal mvccpb.KeyValue' {req_type:put; key:/registry/pods/kube-system/kube-scheduler-addons-627736; req_size:4470; } (duration: 82.298852ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-11T20:58:59.207044Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T20:58:58.820302Z","time spent":"386.531145ms","remote":"127.0.0.1:49888","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4473,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-addons-627736\" mod_revision:307 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-addons-627736\" value_size:4410 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-addons-627736\" > >"}
	{"level":"info","ts":"2024-10-11T20:58:59.207254Z","caller":"traceutil/trace.go:171","msg":"trace[706224663] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"386.763557ms","start":"2024-10-11T20:58:58.820479Z","end":"2024-10-11T20:58:59.207242Z","steps":["trace[706224663] 'process raft request'  (duration: 381.842041ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:58:59.207311Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T20:58:58.820465Z","time spent":"386.809102ms","remote":"127.0.0.1:49976","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-cplqhdyhsvi6z23rwkln5suh7i\" mod_revision:59 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-cplqhdyhsvi6z23rwkln5suh7i\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-cplqhdyhsvi6z23rwkln5suh7i\" > >"}
	{"level":"info","ts":"2024-10-11T20:58:59.263129Z","caller":"traceutil/trace.go:171","msg":"trace[1140111483] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"143.888588ms","start":"2024-10-11T20:58:59.119226Z","end":"2024-10-11T20:58:59.263114Z","steps":["trace[1140111483] 'process raft request'  (duration: 87.690615ms)","trace[1140111483] 'compare'  (duration: 55.52637ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-11T20:58:59.263303Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.875855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T20:58:59.263338Z","caller":"traceutil/trace.go:171","msg":"trace[1102903783] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:400; }","duration":"143.914188ms","start":"2024-10-11T20:58:59.119417Z","end":"2024-10-11T20:58:59.263331Z","steps":["trace[1102903783] 'agreement among raft nodes before linearized reading'  (duration: 143.861931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:58:59.262900Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"468.258601ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-10-11T20:58:59.263766Z","caller":"traceutil/trace.go:171","msg":"trace[2126669368] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:399; }","duration":"469.136885ms","start":"2024-10-11T20:58:58.794617Z","end":"2024-10-11T20:58:59.263754Z","steps":["trace[2126669368] 'agreement among raft nodes before linearized reading'  (duration: 415.120141ms)","trace[2126669368] 'range keys from in-memory index tree'  (duration: 53.085325ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-11T20:58:59.263808Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T20:58:58.794303Z","time spent":"469.486889ms","remote":"127.0.0.1:49806","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":375,"request content":"key:\"/registry/namespaces/kube-system\" "}
	{"level":"warn","ts":"2024-10-11T20:58:59.649987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.48397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-addons-627736\" ","response":"range_response_count:1 size:7632"}
	{"level":"info","ts":"2024-10-11T20:58:59.650167Z","caller":"traceutil/trace.go:171","msg":"trace[693198531] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-addons-627736; range_end:; response_count:1; response_revision:411; }","duration":"104.992436ms","start":"2024-10-11T20:58:59.545158Z","end":"2024-10-11T20:58:59.650150Z","steps":["trace[693198531] 'agreement among raft nodes before linearized reading'  (duration: 100.307368ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:58:59.671711Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.364486ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T20:58:59.672285Z","caller":"traceutil/trace.go:171","msg":"trace[1549218058] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io; range_end:; response_count:0; response_revision:411; }","duration":"101.933454ms","start":"2024-10-11T20:58:59.570326Z","end":"2024-10-11T20:58:59.672260Z","steps":["trace[1549218058] 'agreement among raft nodes before linearized reading'  (duration: 101.352466ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:58:59.672499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.820186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-10-11T20:58:59.689787Z","caller":"traceutil/trace.go:171","msg":"trace[174889307] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:411; }","duration":"119.103438ms","start":"2024-10-11T20:58:59.570668Z","end":"2024-10-11T20:58:59.689772Z","steps":["trace[174889307] 'agreement among raft nodes before linearized reading'  (duration: 101.793856ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T20:59:01.026175Z","caller":"traceutil/trace.go:171","msg":"trace[573435994] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"101.369147ms","start":"2024-10-11T20:59:00.924781Z","end":"2024-10-11T20:59:01.026150Z","steps":["trace[573435994] 'process raft request'  (duration: 93.743233ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T20:59:01.026569Z","caller":"traceutil/trace.go:171","msg":"trace[1305598899] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"101.682172ms","start":"2024-10-11T20:59:00.924876Z","end":"2024-10-11T20:59:01.026558Z","steps":["trace[1305598899] 'process raft request'  (duration: 93.673573ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:59:01.026725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.984703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-attacher\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T20:59:01.026756Z","caller":"traceutil/trace.go:171","msg":"trace[535795001] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:0; response_revision:453; }","duration":"102.033079ms","start":"2024-10-11T20:59:00.924716Z","end":"2024-10-11T20:59:01.026749Z","steps":["trace[535795001] 'agreement among raft nodes before linearized reading'  (duration: 101.966209ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:59:01.027270Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.704461ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-10-11T20:59:01.027318Z","caller":"traceutil/trace.go:171","msg":"trace[2123143941] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:453; }","duration":"102.757292ms","start":"2024-10-11T20:59:00.924552Z","end":"2024-10-11T20:59:01.027310Z","steps":["trace[2123143941] 'agreement among raft nodes before linearized reading'  (duration: 102.622789ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T20:59:01.027457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.005842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T20:59:01.027490Z","caller":"traceutil/trace.go:171","msg":"trace[142831583] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:453; }","duration":"103.033157ms","start":"2024-10-11T20:59:00.924444Z","end":"2024-10-11T20:59:01.027477Z","steps":["trace[142831583] 'agreement among raft nodes before linearized reading'  (duration: 102.991107ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:08:24 up  2:50,  0 users,  load average: 0.19, 0.55, 0.53
	Linux addons-627736 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [aefe62e0ae41697ccac35c4f179dc5c81e40073dcbd6beec28dc1196b01b36d1] <==
	I1011 21:06:21.732672       1 main.go:300] handling current node
	I1011 21:06:31.723762       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:06:31.723888       1 main.go:300] handling current node
	I1011 21:06:41.726752       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:06:41.726887       1 main.go:300] handling current node
	I1011 21:06:51.724533       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:06:51.724565       1 main.go:300] handling current node
	I1011 21:07:01.724204       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:07:01.724234       1 main.go:300] handling current node
	I1011 21:07:11.730073       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:07:11.730107       1 main.go:300] handling current node
	I1011 21:07:21.731430       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:07:21.731462       1 main.go:300] handling current node
	I1011 21:07:31.731651       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:07:31.731685       1 main.go:300] handling current node
	I1011 21:07:41.730928       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:07:41.730962       1 main.go:300] handling current node
	I1011 21:07:51.725218       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:07:51.725251       1 main.go:300] handling current node
	I1011 21:08:01.724539       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:08:01.724571       1 main.go:300] handling current node
	I1011 21:08:11.729811       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:08:11.729850       1 main.go:300] handling current node
	I1011 21:08:21.729183       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:08:21.729303       1 main.go:300] handling current node
	
	
	==> kube-apiserver [98ba21f18fbc72ce3b7e796c4133c957df0d9d1cd975b5e869be80d24f3cbb78] <==
	 > logger="UnhandledError"
	E1011 21:01:05.295578       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.164.40:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.164.40:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.164.40:443: connect: connection refused" logger="UnhandledError"
	I1011 21:01:05.362892       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1011 21:01:50.667244       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51094: use of closed network connection
	E1011 21:01:50.908243       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51134: use of closed network connection
	E1011 21:01:51.061439       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51142: use of closed network connection
	E1011 21:02:16.959242       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1011 21:02:25.450041       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.79.233"}
	I1011 21:02:48.661028       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1011 21:03:19.580521       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:19.581669       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:19.625633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:19.625807       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:19.651968       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:19.652773       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:19.751605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:19.752094       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1011 21:03:20.654410       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1011 21:03:20.752087       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1011 21:03:20.862001       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1011 21:03:33.330447       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1011 21:03:34.357473       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1011 21:03:38.863277       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1011 21:03:39.173708       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.46.91"}
	I1011 21:05:57.211289       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.110.249"}
	
	
	==> kube-controller-manager [44786c037f5054fe50ce52477ece70a3a1b44eb12296187ea90886c4f5c2d432] <==
	E1011 21:06:11.123386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1011 21:06:12.498553       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W1011 21:06:14.024746       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:06:14.024790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1011 21:06:28.937565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-627736"
	W1011 21:06:33.056044       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:06:33.056099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:06:42.782492       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:06:42.782536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:06:49.561955       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:06:49.561997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:06:55.053699       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:06:55.053744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:07:16.683316       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:07:16.683359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:07:25.295796       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:07:25.295834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:07:45.206437       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:07:45.206484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:07:51.059041       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:07:51.059083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:07:52.391390       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:07:52.391430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:08:15.405332       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:08:15.405378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [b1b0f6640b0b247d6cd5ee803f1950cb27b1115ce602aeaf5d6b66ce6d532b8e] <==
	I1011 20:59:00.859908       1 server_linux.go:66] "Using iptables proxy"
	I1011 20:59:01.400312       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1011 20:59:01.434619       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 20:59:01.832769       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1011 20:59:01.832849       1 server_linux.go:169] "Using iptables Proxier"
	I1011 20:59:01.925993       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 20:59:01.928942       1 server.go:483] "Version info" version="v1.31.1"
	I1011 20:59:01.929042       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 20:59:01.934370       1 config.go:199] "Starting service config controller"
	I1011 20:59:01.934941       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 20:59:01.935028       1 config.go:105] "Starting endpoint slice config controller"
	I1011 20:59:01.935035       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 20:59:01.935510       1 config.go:328] "Starting node config controller"
	I1011 20:59:01.935518       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 20:59:02.068717       1 shared_informer.go:320] Caches are synced for node config
	I1011 20:59:02.068821       1 shared_informer.go:320] Caches are synced for service config
	I1011 20:59:02.068881       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [52a847d70739c35051459efeaa1d882d9e84a8e27fe0d2e8f61cd6d9acfed015] <==
	W1011 20:58:48.801939       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1011 20:58:48.802589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.801989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1011 20:58:48.802694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.802036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 20:58:48.802784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.802495       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 20:58:48.802896       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1011 20:58:48.806235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 20:58:48.806312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 20:58:48.806476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806551       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 20:58:48.806614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 20:58:48.806732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 20:58:48.806875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1011 20:58:48.806993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.806960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 20:58:48.807027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:48.807077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 20:58:48.807124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1011 20:58:50.198675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 21:06:40 addons-627736 kubelet[1500]: E1011 21:06:40.633134    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680800632872442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:06:50 addons-627736 kubelet[1500]: E1011 21:06:50.635723    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680810635468658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:06:50 addons-627736 kubelet[1500]: E1011 21:06:50.635763    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680810635468658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:06:50 addons-627736 kubelet[1500]: I1011 21:06:50.756689    1500 scope.go:117] "RemoveContainer" containerID="88f6f9dcbf080410b52c596f970b87ca8a74ffc95d4c1773f1fbd6bf2c069fed"
	Oct 11 21:06:50 addons-627736 kubelet[1500]: I1011 21:06:50.784050    1500 scope.go:117] "RemoveContainer" containerID="64b8cb7ba62e1d9a623376b25dea1163b2fa7325f102b0f1fc41eba569a4fee2"
	Oct 11 21:07:00 addons-627736 kubelet[1500]: E1011 21:07:00.638352    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680820638125257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:00 addons-627736 kubelet[1500]: E1011 21:07:00.638398    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680820638125257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:09 addons-627736 kubelet[1500]: I1011 21:07:09.252450    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:07:10 addons-627736 kubelet[1500]: E1011 21:07:10.641409    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680830641177500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:10 addons-627736 kubelet[1500]: E1011 21:07:10.641856    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680830641177500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:20 addons-627736 kubelet[1500]: E1011 21:07:20.644617    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680840644417084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:20 addons-627736 kubelet[1500]: E1011 21:07:20.644664    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680840644417084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:30 addons-627736 kubelet[1500]: E1011 21:07:30.654773    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680850652794206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:30 addons-627736 kubelet[1500]: E1011 21:07:30.654825    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680850652794206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:40 addons-627736 kubelet[1500]: E1011 21:07:40.657613    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680860657377739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:40 addons-627736 kubelet[1500]: E1011 21:07:40.657647    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680860657377739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:50 addons-627736 kubelet[1500]: E1011 21:07:50.660689    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680870660435803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:50 addons-627736 kubelet[1500]: E1011 21:07:50.660727    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680870660435803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:00 addons-627736 kubelet[1500]: E1011 21:08:00.663893    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680880663658063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:00 addons-627736 kubelet[1500]: E1011 21:08:00.663933    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680880663658063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:10 addons-627736 kubelet[1500]: I1011 21:08:10.253393    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:08:10 addons-627736 kubelet[1500]: E1011 21:08:10.667144    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680890666896813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:10 addons-627736 kubelet[1500]: E1011 21:08:10.667179    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680890666896813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:20 addons-627736 kubelet[1500]: E1011 21:08:20.670128    1500 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680900669895662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:20 addons-627736 kubelet[1500]: E1011 21:08:20.670160    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680900669895662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606950,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [381fa28b97303cf241449693f03a0ef78f01313a4347175b735a1dc510847596] <==
	I1011 20:59:42.845004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 20:59:42.939797       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 20:59:42.939929       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 20:59:42.976569       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 20:59:42.977397       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-627736_29ba4c93-c83a-438b-94c7-7f2b7d10ae2c!
	I1011 20:59:42.978825       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d68dabc9-4d42-4ae2-86e5-41350b7a4f68", APIVersion:"v1", ResourceVersion:"920", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-627736_29ba4c93-c83a-438b-94c7-7f2b7d10ae2c became leader
	I1011 20:59:43.078057       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-627736_29ba4c93-c83a-438b-94c7-7f2b7d10ae2c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-627736 -n addons-627736
helpers_test.go:261: (dbg) Run:  kubectl --context addons-627736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (343.93s)


Test pass (297/329)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.58
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 6.24
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 215.84
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/PullSecret 10.87
34 TestAddons/parallel/Registry 18.42
36 TestAddons/parallel/InspektorGadget 11.7
39 TestAddons/parallel/CSI 58.82
40 TestAddons/parallel/Headlamp 17.8
41 TestAddons/parallel/CloudSpanner 6.63
42 TestAddons/parallel/LocalPath 10.15
43 TestAddons/parallel/NvidiaDevicePlugin 6.67
44 TestAddons/parallel/Yakd 11.79
46 TestAddons/StoppedEnableDisable 12.18
47 TestCertOptions 39.64
48 TestCertExpiration 259.01
50 TestForceSystemdFlag 32.79
51 TestForceSystemdEnv 37.22
57 TestErrorSpam/setup 32.14
58 TestErrorSpam/start 0.7
59 TestErrorSpam/status 0.99
60 TestErrorSpam/pause 1.69
61 TestErrorSpam/unpause 1.89
62 TestErrorSpam/stop 1.45
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 48.18
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 29.87
69 TestFunctional/serial/KubeContext 0.06
70 TestFunctional/serial/KubectlGetPods 0.1
73 TestFunctional/serial/CacheCmd/cache/add_remote 4.36
74 TestFunctional/serial/CacheCmd/cache/add_local 1.41
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
76 TestFunctional/serial/CacheCmd/cache/list 0.06
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
78 TestFunctional/serial/CacheCmd/cache/cache_reload 2.18
79 TestFunctional/serial/CacheCmd/cache/delete 0.12
80 TestFunctional/serial/MinikubeKubectlCmd 0.14
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
82 TestFunctional/serial/ExtraConfig 36.07
83 TestFunctional/serial/ComponentHealth 0.1
84 TestFunctional/serial/LogsCmd 1.7
85 TestFunctional/serial/LogsFileCmd 1.74
86 TestFunctional/serial/InvalidService 4.37
88 TestFunctional/parallel/ConfigCmd 0.55
89 TestFunctional/parallel/DashboardCmd 8.81
90 TestFunctional/parallel/DryRun 0.45
91 TestFunctional/parallel/InternationalLanguage 0.18
92 TestFunctional/parallel/StatusCmd 0.97
96 TestFunctional/parallel/ServiceCmdConnect 11.72
97 TestFunctional/parallel/AddonsCmd 0.2
98 TestFunctional/parallel/PersistentVolumeClaim 26.93
100 TestFunctional/parallel/SSHCmd 0.65
101 TestFunctional/parallel/CpCmd 2.17
103 TestFunctional/parallel/FileSync 0.35
104 TestFunctional/parallel/CertSync 2.16
108 TestFunctional/parallel/NodeLabels 0.16
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
112 TestFunctional/parallel/License 0.41
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.43
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ServiceCmd/DeployApp 7.2
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
126 TestFunctional/parallel/ProfileCmd/profile_list 0.44
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
128 TestFunctional/parallel/MountCmd/any-port 9.19
129 TestFunctional/parallel/ServiceCmd/List 0.61
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
132 TestFunctional/parallel/ServiceCmd/Format 0.42
133 TestFunctional/parallel/ServiceCmd/URL 0.42
134 TestFunctional/parallel/MountCmd/specific-port 1.59
135 TestFunctional/parallel/MountCmd/VerifyCleanup 3.02
136 TestFunctional/parallel/Version/short 0.07
137 TestFunctional/parallel/Version/components 0.89
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.63
143 TestFunctional/parallel/ImageCommands/Setup 0.81
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.58
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.12
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.83
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 173.28
161 TestMultiControlPlane/serial/DeployApp 8.92
162 TestMultiControlPlane/serial/PingHostFromPods 1.59
163 TestMultiControlPlane/serial/AddWorkerNode 44.28
164 TestMultiControlPlane/serial/NodeLabels 0.1
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.97
166 TestMultiControlPlane/serial/CopyFile 18.55
167 TestMultiControlPlane/serial/StopSecondaryNode 12.75
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
169 TestMultiControlPlane/serial/RestartSecondaryNode 23.16
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.32
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 191.2
172 TestMultiControlPlane/serial/DeleteSecondaryNode 12.55
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
174 TestMultiControlPlane/serial/StopCluster 35.73
175 TestMultiControlPlane/serial/RestartCluster 79.3
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
177 TestMultiControlPlane/serial/AddSecondaryNode 75.78
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1
182 TestJSONOutput/start/Command 50.48
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.87
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.72
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.92
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.21
207 TestKicCustomNetwork/create_custom_network 38.99
208 TestKicCustomNetwork/use_default_bridge_network 31.69
209 TestKicExistingNetwork 31.7
210 TestKicCustomSubnet 34.58
211 TestKicStaticIP 34.81
212 TestMainNoArgs 0.06
213 TestMinikubeProfile 68.72
216 TestMountStart/serial/StartWithMountFirst 6.85
217 TestMountStart/serial/VerifyMountFirst 0.26
218 TestMountStart/serial/StartWithMountSecond 7.23
219 TestMountStart/serial/VerifyMountSecond 0.25
220 TestMountStart/serial/DeleteFirst 1.61
221 TestMountStart/serial/VerifyMountPostDelete 0.25
222 TestMountStart/serial/Stop 1.2
223 TestMountStart/serial/RestartStopped 7.63
224 TestMountStart/serial/VerifyMountPostStop 0.25
227 TestMultiNode/serial/FreshStart2Nodes 74.99
228 TestMultiNode/serial/DeployApp2Nodes 6.93
229 TestMultiNode/serial/PingHostFrom2Pods 0.97
230 TestMultiNode/serial/AddNode 30.23
231 TestMultiNode/serial/MultiNodeLabels 0.09
232 TestMultiNode/serial/ProfileList 0.66
233 TestMultiNode/serial/CopyFile 9.72
234 TestMultiNode/serial/StopNode 2.24
235 TestMultiNode/serial/StartAfterStop 9.7
236 TestMultiNode/serial/RestartKeepsNodes 101.39
237 TestMultiNode/serial/DeleteNode 5.4
238 TestMultiNode/serial/StopMultiNode 23.82
239 TestMultiNode/serial/RestartMultiNode 48.05
240 TestMultiNode/serial/ValidateNameConflict 34.33
245 TestPreload 124.84
247 TestScheduledStopUnix 109.98
250 TestInsufficientStorage 10.77
251 TestRunningBinaryUpgrade 82.38
253 TestKubernetesUpgrade 387.86
254 TestMissingContainerUpgrade 156.98
256 TestPause/serial/Start 60.74
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
259 TestNoKubernetes/serial/StartWithK8s 42.34
260 TestNoKubernetes/serial/StartWithStopK8s 7.69
261 TestNoKubernetes/serial/Start 8.22
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
263 TestNoKubernetes/serial/ProfileList 1.07
264 TestNoKubernetes/serial/Stop 1.24
265 TestPause/serial/SecondStartNoReconfiguration 23.78
266 TestNoKubernetes/serial/StartNoArgs 7.37
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
268 TestPause/serial/Pause 1.09
269 TestPause/serial/VerifyStatus 0.44
270 TestPause/serial/Unpause 0.91
271 TestPause/serial/PauseAgain 1.24
272 TestPause/serial/DeletePaused 3.46
273 TestPause/serial/VerifyDeletedResources 0.14
274 TestStoppedBinaryUpgrade/Setup 1.06
275 TestStoppedBinaryUpgrade/Upgrade 73.03
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.07
291 TestNetworkPlugins/group/false 3.97
296 TestStartStop/group/old-k8s-version/serial/FirstStart 155.34
297 TestStartStop/group/old-k8s-version/serial/DeployApp 10.59
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
299 TestStartStop/group/old-k8s-version/serial/Stop 12.01
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
301 TestStartStop/group/old-k8s-version/serial/SecondStart 37.19
303 TestStartStop/group/no-preload/serial/FirstStart 66.84
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 29.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
307 TestStartStop/group/old-k8s-version/serial/Pause 3.61
309 TestStartStop/group/embed-certs/serial/FirstStart 52.76
310 TestStartStop/group/no-preload/serial/DeployApp 11.46
311 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.33
312 TestStartStop/group/no-preload/serial/Stop 12.12
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
314 TestStartStop/group/no-preload/serial/SecondStart 305.31
315 TestStartStop/group/embed-certs/serial/DeployApp 11.46
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.63
317 TestStartStop/group/embed-certs/serial/Stop 12.52
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
319 TestStartStop/group/embed-certs/serial/SecondStart 281.18
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/no-preload/serial/Pause 2.93
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.39
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
329 TestStartStop/group/embed-certs/serial/Pause 4.17
331 TestStartStop/group/newest-cni/serial/FirstStart 43.13
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.4
334 TestStartStop/group/newest-cni/serial/Stop 1.32
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
336 TestStartStop/group/newest-cni/serial/SecondStart 18.67
337 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.4
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.66
339 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.53
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
343 TestStartStop/group/newest-cni/serial/Pause 3.15
344 TestNetworkPlugins/group/auto/Start 56.67
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 274.62
347 TestNetworkPlugins/group/auto/KubeletFlags 0.32
348 TestNetworkPlugins/group/auto/NetCatPod 13.28
349 TestNetworkPlugins/group/auto/DNS 0.22
350 TestNetworkPlugins/group/auto/Localhost 0.15
351 TestNetworkPlugins/group/auto/HairPin 0.16
352 TestNetworkPlugins/group/kindnet/Start 50.32
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
355 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
356 TestNetworkPlugins/group/kindnet/DNS 0.17
357 TestNetworkPlugins/group/kindnet/Localhost 0.15
358 TestNetworkPlugins/group/kindnet/HairPin 0.15
359 TestNetworkPlugins/group/calico/Start 60.85
360 TestNetworkPlugins/group/calico/ControllerPod 6.01
361 TestNetworkPlugins/group/calico/KubeletFlags 0.3
362 TestNetworkPlugins/group/calico/NetCatPod 10.25
363 TestNetworkPlugins/group/calico/DNS 0.21
364 TestNetworkPlugins/group/calico/Localhost 0.18
365 TestNetworkPlugins/group/calico/HairPin 0.16
366 TestNetworkPlugins/group/custom-flannel/Start 62.5
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
368 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
369 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
370 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.7
371 TestNetworkPlugins/group/enable-default-cni/Start 48.16
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
376 TestNetworkPlugins/group/custom-flannel/DNS 0.17
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
382 TestNetworkPlugins/group/flannel/Start 53.31
383 TestNetworkPlugins/group/bridge/Start 78.04
384 TestNetworkPlugins/group/flannel/ControllerPod 6.01
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
386 TestNetworkPlugins/group/flannel/NetCatPod 11.29
387 TestNetworkPlugins/group/flannel/DNS 0.21
388 TestNetworkPlugins/group/flannel/Localhost 0.16
389 TestNetworkPlugins/group/flannel/HairPin 0.16
390 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
391 TestNetworkPlugins/group/bridge/NetCatPod 12.36
392 TestNetworkPlugins/group/bridge/DNS 0.16
393 TestNetworkPlugins/group/bridge/Localhost 0.14
394 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (7.58s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-550167 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-550167 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.58367279s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.58s)
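The `-o=json` flag used above makes `minikube start` emit one JSON event per line instead of human-readable output. A minimal sketch of consuming such a stream follows; the sample lines and their field names are illustrative assumptions, not events captured from this run:

```python
import json

# Two illustrative lines in the shape of minikube's JSON event stream
# (event types and data fields here are an assumption, not from this test run).
sample = (
    '{"type":"io.k8s.sigs.minikube.step",'
    '"data":{"currentstep":"1","totalsteps":"10","name":"Selecting a driver"}}\n'
    '{"type":"io.k8s.sigs.minikube.download",'
    '"data":{"artifact":"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4"}}\n'
)

# Each line is an independent JSON object, so the stream can be parsed incrementally.
events = [json.loads(line) for line in sample.splitlines()]
steps = [e for e in events if e["type"].endswith(".step")]
```

A test harness that consumes this stream can track progress from the step events alone while ignoring the rest.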
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1011 20:57:55.344836  282920 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1011 20:57:55.344921  282920 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
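The preload-exists check above amounts to looking for the cached tarball on disk. A sketch of the same check, using the cache path visible in the log (the `MINIKUBE_HOME` fallback to `~/.minikube` is an assumption):

```python
import os

# Cache layout taken from the log above; the MINIKUBE_HOME fallback is an assumption.
home = os.environ.get("MINIKUBE_HOME", os.path.expanduser("~/.minikube"))
preload = os.path.join(
    home,
    "cache",
    "preloaded-tarball",
    "preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4",
)
cached = os.path.isfile(preload)
print("preload cached" if cached else "preload not cached")
```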
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-550167
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-550167: exit status 85 (73.804435ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-550167 | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC |          |
	|         | -p download-only-550167        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:57:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:57:47.810764  282925 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:57:47.810977  282925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:57:47.810996  282925 out.go:358] Setting ErrFile to fd 2...
	I1011 20:57:47.811003  282925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:57:47.811266  282925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	W1011 20:57:47.811410  282925 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19749-277533/.minikube/config/config.json: open /home/jenkins/minikube-integration/19749-277533/.minikube/config/config.json: no such file or directory
	I1011 20:57:47.811848  282925 out.go:352] Setting JSON to true
	I1011 20:57:47.812755  282925 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9611,"bootTime":1728670657,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1011 20:57:47.812833  282925 start.go:139] virtualization:  
	I1011 20:57:47.816792  282925 out.go:97] [download-only-550167] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1011 20:57:47.816935  282925 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball: no such file or directory
	I1011 20:57:47.816967  282925 notify.go:220] Checking for updates...
	I1011 20:57:47.819844  282925 out.go:169] MINIKUBE_LOCATION=19749
	I1011 20:57:47.822829  282925 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:57:47.825611  282925 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	I1011 20:57:47.828427  282925 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	I1011 20:57:47.830947  282925 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1011 20:57:47.838024  282925 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 20:57:47.838332  282925 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:57:47.872849  282925 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 20:57:47.872967  282925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:57:47.926727  282925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 20:57:47.917132533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:57:47.926838  282925 docker.go:318] overlay module found
	I1011 20:57:47.929355  282925 out.go:97] Using the docker driver based on user configuration
	I1011 20:57:47.929379  282925 start.go:297] selected driver: docker
	I1011 20:57:47.929386  282925 start.go:901] validating driver "docker" against <nil>
	I1011 20:57:47.929483  282925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:57:47.987598  282925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 20:57:47.97778602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:57:47.987845  282925 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:57:47.988219  282925 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1011 20:57:47.988384  282925 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 20:57:47.990973  282925 out.go:169] Using Docker driver with root privileges
	I1011 20:57:47.993424  282925 cni.go:84] Creating CNI manager for ""
	I1011 20:57:47.993497  282925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1011 20:57:47.993511  282925 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 20:57:47.993606  282925 start.go:340] cluster config:
	{Name:download-only-550167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-550167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:57:47.995994  282925 out.go:97] Starting "download-only-550167" primary control-plane node in "download-only-550167" cluster
	I1011 20:57:47.996046  282925 cache.go:121] Beginning downloading kic base image for docker with crio
	I1011 20:57:47.998431  282925 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1011 20:57:47.998478  282925 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 20:57:47.998588  282925 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1011 20:57:48.015690  282925 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1011 20:57:48.016256  282925 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1011 20:57:48.016363  282925 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1011 20:57:48.057450  282925 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1011 20:57:48.057480  282925 cache.go:56] Caching tarball of preloaded images
	I1011 20:57:48.057643  282925 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 20:57:48.060661  282925 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1011 20:57:48.060698  282925 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1011 20:57:48.154410  282925 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-550167 host does not exist
	  To start a cluster, run: "minikube start -p download-only-550167"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
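The preload download in the log above carries a `?checksum=md5:...` query parameter. A hedged sketch of how such a checksum can be verified after download (an illustrative helper, not minikube's actual implementation):

```python
import hashlib

def md5_matches(path: str, expected_hex: str) -> bool:
    # Stream the file in 1 MiB chunks so large tarballs need not fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

The expected hex digest would come from the `checksum=md5:` portion of the download URL, e.g. `59cd2ef07b53f039bfd1761b921f2a02` in the log above.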
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-550167
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)
TestDownloadOnly/v1.31.1/json-events (6.24s)
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-455194 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-455194 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.236828423s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.24s)
TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1011 20:58:01.997630  282920 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1011 20:58:01.997666  282920 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-455194
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-455194: exit status 85 (74.2875ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-550167 | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC |                     |
	|         | -p download-only-550167        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC | 11 Oct 24 20:57 UTC |
	| delete  | -p download-only-550167        | download-only-550167 | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC | 11 Oct 24 20:57 UTC |
	| start   | -o=json --download-only        | download-only-455194 | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC |                     |
	|         | -p download-only-455194        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:57:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:57:55.808684  283123 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:57:55.808866  283123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:57:55.808879  283123 out.go:358] Setting ErrFile to fd 2...
	I1011 20:57:55.808885  283123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:57:55.809143  283123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	I1011 20:57:55.809583  283123 out.go:352] Setting JSON to true
	I1011 20:57:55.810447  283123 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9619,"bootTime":1728670657,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1011 20:57:55.810521  283123 start.go:139] virtualization:  
	I1011 20:57:55.812492  283123 out.go:97] [download-only-455194] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 20:57:55.812692  283123 notify.go:220] Checking for updates...
	I1011 20:57:55.813835  283123 out.go:169] MINIKUBE_LOCATION=19749
	I1011 20:57:55.815243  283123 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:57:55.816665  283123 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	I1011 20:57:55.817935  283123 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	I1011 20:57:55.819099  283123 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1011 20:57:55.821585  283123 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 20:57:55.821924  283123 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:57:55.842244  283123 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 20:57:55.842382  283123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:57:55.892533  283123 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-11 20:57:55.878595913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:57:55.892677  283123 docker.go:318] overlay module found
	I1011 20:57:55.894044  283123 out.go:97] Using the docker driver based on user configuration
	I1011 20:57:55.894070  283123 start.go:297] selected driver: docker
	I1011 20:57:55.894077  283123 start.go:901] validating driver "docker" against <nil>
	I1011 20:57:55.894182  283123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:57:55.948127  283123 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-11 20:57:55.938870959 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:57:55.948370  283123 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:57:55.948683  283123 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1011 20:57:55.948845  283123 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 20:57:55.950580  283123 out.go:169] Using Docker driver with root privileges
	I1011 20:57:55.951740  283123 cni.go:84] Creating CNI manager for ""
	I1011 20:57:55.951806  283123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1011 20:57:55.951819  283123 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 20:57:55.951892  283123 start.go:340] cluster config:
	{Name:download-only-455194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-455194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:57:55.953281  283123 out.go:97] Starting "download-only-455194" primary control-plane node in "download-only-455194" cluster
	I1011 20:57:55.953309  283123 cache.go:121] Beginning downloading kic base image for docker with crio
	I1011 20:57:55.954480  283123 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1011 20:57:55.954503  283123 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:57:55.954605  283123 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1011 20:57:55.969222  283123 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1011 20:57:55.969333  283123 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1011 20:57:55.969359  283123 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1011 20:57:55.969368  283123 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1011 20:57:55.969375  283123 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1011 20:57:56.011074  283123 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1011 20:57:56.011105  283123 cache.go:56] Caching tarball of preloaded images
	I1011 20:57:56.011700  283123 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:57:56.013234  283123 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1011 20:57:56.013263  283123 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1011 20:57:56.095825  283123 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1011 20:58:00.435461  283123 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1011 20:58:00.435568  283123 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19749-277533/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-455194 host does not exist
	  To start a cluster, run: "minikube start -p download-only-455194"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-455194
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
I1011 20:58:03.201328  282920 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-919124 --alsologtostderr --binary-mirror http://127.0.0.1:45157 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-919124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-919124
--- PASS: TestBinaryMirror (0.55s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-627736
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-627736: exit status 85 (80.58674ms)

-- stdout --
	* Profile "addons-627736" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-627736"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-627736
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-627736: exit status 85 (68.781271ms)

-- stdout --
	* Profile "addons-627736" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-627736"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (215.84s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-627736 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-627736 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m35.839965019s)
--- PASS: TestAddons/Setup (215.84s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-627736 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-627736 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/PullSecret (10.87s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-627736 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-627736 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2115b9a9-9453-4653-8e86-138868c46b76] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2115b9a9-9453-4653-8e86-138868c46b76] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 10.004315949s
addons_test.go:633: (dbg) Run:  kubectl --context addons-627736 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-627736 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-627736 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-627736 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (10.87s)

TestAddons/parallel/Registry (18.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 8.627128ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-p6l9v" [0674412c-ee63-4347-b013-fcbb85bd1f6a] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004677705s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hxsb7" [9f05d6fb-3f2f-4840-a6f5-392af1bf7e10] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004450037s
addons_test.go:331: (dbg) Run:  kubectl --context addons-627736 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-627736 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-627736 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.431820266s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 ip
2024/10/11 21:02:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.42s)

TestAddons/parallel/InspektorGadget (11.7s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8j4dt" [7d97ff5d-8c7b-4ec8-ab5b-62fdad01a2f4] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003870256s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-627736 addons disable inspektor-gadget --alsologtostderr -v=1: (5.690405711s)
--- PASS: TestAddons/parallel/InspektorGadget (11.70s)

TestAddons/parallel/CSI (58.82s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1011 21:02:28.030895  282920 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1011 21:02:28.047277  282920 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1011 21:02:28.047426  282920 kapi.go:107] duration metric: took 16.54638ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 16.600868ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-627736 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-627736 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7a69e595-6211-49ab-9c79-0c0d42f01445] Pending
helpers_test.go:344: "task-pv-pod" [7a69e595-6211-49ab-9c79-0c0d42f01445] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7a69e595-6211-49ab-9c79-0c0d42f01445] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003997977s
addons_test.go:511: (dbg) Run:  kubectl --context addons-627736 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-627736 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-627736 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-627736 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-627736 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-627736 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-627736 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1d73d694-9848-4ba6-add4-2cc05e3359dc] Pending
helpers_test.go:344: "task-pv-pod-restore" [1d73d694-9848-4ba6-add4-2cc05e3359dc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1d73d694-9848-4ba6-add4-2cc05e3359dc] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003703309s
addons_test.go:553: (dbg) Run:  kubectl --context addons-627736 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-627736 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-627736 delete volumesnapshot new-snapshot-demo
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-627736 addons disable volumesnapshots --alsologtostderr -v=1: (1.008911553s)
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-627736 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.765448087s)
--- PASS: TestAddons/parallel/CSI (58.82s)
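The long run of identical `get pvc` calls above is the test's readiness poll: it re-reads `.status.phase` until the claim reports the expected phase (e.g. `Bound`). A minimal shell sketch of the same wait loop — the `wait_for_phase` helper name and the one-second interval are illustrative, not part of the minikube test suite:

```shell
#!/bin/sh
# Poll a command once per second until it prints the wanted phase or the
# timeout (in seconds) expires. Mirrors the repeated
# `kubectl get pvc ... -o jsonpath={.status.phase}` calls in the log above.
wait_for_phase() {
  want="$1"; timeout="$2"; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    phase="$("$@" 2>/dev/null)"
    if [ "$phase" = "$want" ]; then
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

# Example against a live cluster (context/claim names taken from this log):
# wait_for_phase Bound 300 kubectl --context addons-627736 \
#   get pvc hpvc-restore -n default -o 'jsonpath={.status.phase}'
```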

                                                
                                    
TestAddons/parallel/Headlamp (17.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-627736 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-zpvj2" [41061d06-ee04-4cda-b0c2-f1aa312da37e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-zpvj2" [41061d06-ee04-4cda-b0c2-f1aa312da37e] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005593576s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable headlamp --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-627736 addons disable headlamp --alsologtostderr -v=1: (5.834212959s)
--- PASS: TestAddons/parallel/Headlamp (17.80s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-bxsnj" [36901558-8a84-4411-959a-2d628bffc4af] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005338797s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                    
TestAddons/parallel/LocalPath (10.15s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-627736 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-627736 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-627736 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5e0158f3-b58d-413e-b9f2-d6714166bcbe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5e0158f3-b58d-413e-b9f2-d6714166bcbe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5e0158f3-b58d-413e-b9f2-d6714166bcbe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00631018s
addons_test.go:902: (dbg) Run:  kubectl --context addons-627736 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 ssh "cat /opt/local-path-provisioner/pvc-1c41c8d6-e192-4aab-96f5-793834495bbd_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-627736 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-627736 delete pvc test-pvc
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.15s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.67s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-p9nsd" [41af943b-e0c9-4974-aa28-297cadfc3d28] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005187819s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.67s)

                                                
                                    
TestAddons/parallel/Yakd (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-dbdmt" [337d4f54-4643-4c10-8bd7-7ccf101b8a40] Running
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003829351s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable yakd --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-627736 addons disable yakd --alsologtostderr -v=1: (5.79012612s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.18s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-627736
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-627736: (11.895916271s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-627736
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-627736
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-627736
--- PASS: TestAddons/StoppedEnableDisable (12.18s)

                                                
                                    
TestCertOptions (39.64s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-498331 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-498331 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.693638037s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-498331 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-498331 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-498331 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-498331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-498331
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-498331: (2.136034215s)
--- PASS: TestCertOptions (39.64s)
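cert_options_test.go:60 checks that every `--apiserver-ips` and `--apiserver-names` value passed at start ends up as a Subject Alternative Name in the generated apiserver certificate. The same check can be reproduced offline by minting a throwaway self-signed certificate with those SANs and dumping it the way the test does — the `/tmp` paths are arbitrary, and `-addext` requires OpenSSL 1.1.1 or newer:

```shell
#!/bin/sh
# Create a self-signed cert carrying the SANs this test passes to minikube.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt

# Same inspection command the test runs over ssh against
# /var/lib/minikube/certs/apiserver.crt:
openssl x509 -text -noout -in /tmp/apiserver.crt | grep -A1 'Subject Alternative Name'
```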

                                                
                                    
TestCertExpiration (259.01s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-776438 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-776438 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (44.275605705s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-776438 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-776438 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (32.473781614s)
helpers_test.go:175: Cleaning up "cert-expiration-776438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-776438
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-776438: (2.261162221s)
--- PASS: TestCertExpiration (259.01s)

                                                
                                    
TestForceSystemdFlag (32.79s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-196181 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-196181 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.052655838s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-196181 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-196181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-196181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-196181: (2.453355103s)
--- PASS: TestForceSystemdFlag (32.79s)

                                                
                                    
TestForceSystemdEnv (37.22s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-120671 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-120671 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.845404495s)
helpers_test.go:175: Cleaning up "force-systemd-env-120671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-120671
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-120671: (2.373988355s)
--- PASS: TestForceSystemdEnv (37.22s)

                                                
                                    
TestErrorSpam/setup (32.14s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-534062 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-534062 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-534062 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-534062 --driver=docker  --container-runtime=crio: (32.142632296s)
--- PASS: TestErrorSpam/setup (32.14s)

                                                
                                    
TestErrorSpam/start (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

                                                
                                    
TestErrorSpam/status (0.99s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 status
--- PASS: TestErrorSpam/status (0.99s)

                                                
                                    
TestErrorSpam/pause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
TestErrorSpam/unpause (1.89s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 unpause
--- PASS: TestErrorSpam/unpause (1.89s)

                                                
                                    
TestErrorSpam/stop (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 stop: (1.252706538s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-534062 --log_dir /tmp/nospam-534062 stop
--- PASS: TestErrorSpam/stop (1.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19749-277533/.minikube/files/etc/test/nested/copy/282920/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (48.18s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-824457 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-824457 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (48.182432564s)
--- PASS: TestFunctional/serial/StartWithProxy (48.18s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.87s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1011 21:10:17.327814  282920 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-824457 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-824457 --alsologtostderr -v=8: (29.871142694s)
functional_test.go:663: soft start took 29.871701643s for "functional-824457" cluster.
I1011 21:10:47.199258  282920 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.87s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-824457 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-824457 cache add registry.k8s.io/pause:3.1: (1.517112753s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-824457 cache add registry.k8s.io/pause:3.3: (1.534179552s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-824457 cache add registry.k8s.io/pause:latest: (1.308490508s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-824457 /tmp/TestFunctionalserialCacheCmdcacheadd_local4067276409/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 cache add minikube-local-cache-test:functional-824457
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 cache delete minikube-local-cache-test:functional-824457
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-824457
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-824457 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.121425ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-824457 cache reload: (1.269516168s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 kubectl -- --context functional-824457 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-824457 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-824457 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-824457 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.071802538s)
functional_test.go:761: restart took 36.07194271s for "functional-824457" cluster.
I1011 21:11:32.208325  282920 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (36.07s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-824457 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-824457 logs: (1.700155922s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 logs --file /tmp/TestFunctionalserialLogsFileCmd2446782764/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-824457 logs --file /tmp/TestFunctionalserialLogsFileCmd2446782764/001/logs.txt: (1.739425881s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.74s)

                                                
                                    
TestFunctional/serial/InvalidService (4.37s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-824457 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-824457
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-824457: exit status 115 (502.548931ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31072 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-824457 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-824457 config get cpus: exit status 14 (86.492268ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 config unset cpus
E1011 21:11:40.499747  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:11:40.523062  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:11:40.564397  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 config get cpus
E1011 21:11:40.645989  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-824457 config get cpus: exit status 14 (89.175475ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.55s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-824457 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-824457 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 310713: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.81s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-824457 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-824457 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.772452ms)

                                                
                                                
-- stdout --
	* [functional-824457] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:12:14.420513  310423 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:12:14.420675  310423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:12:14.420689  310423 out.go:358] Setting ErrFile to fd 2...
	I1011 21:12:14.420698  310423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:12:14.420952  310423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	I1011 21:12:14.421332  310423 out.go:352] Setting JSON to false
	I1011 21:12:14.422283  310423 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10478,"bootTime":1728670657,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1011 21:12:14.422357  310423 start.go:139] virtualization:  
	I1011 21:12:14.427139  310423 out.go:177] * [functional-824457] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 21:12:14.429856  310423 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:12:14.429921  310423 notify.go:220] Checking for updates...
	I1011 21:12:14.434926  310423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:12:14.439961  310423 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	I1011 21:12:14.442407  310423 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	I1011 21:12:14.445084  310423 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 21:12:14.447631  310423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:12:14.450708  310423 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:12:14.451296  310423 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:12:14.482367  310423 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 21:12:14.482490  310423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:12:14.540090  310423 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 21:12:14.52980767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:12:14.540203  310423 docker.go:318] overlay module found
	I1011 21:12:14.544891  310423 out.go:177] * Using the docker driver based on existing profile
	I1011 21:12:14.547448  310423 start.go:297] selected driver: docker
	I1011 21:12:14.547468  310423 start.go:901] validating driver "docker" against &{Name:functional-824457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:12:14.547579  310423 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:12:14.550761  310423 out.go:201] 
	W1011 21:12:14.553355  310423 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1011 21:12:14.555959  310423 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-824457 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-824457 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-824457 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (183.704445ms)

                                                
                                                
-- stdout --
	* [functional-824457] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:12:14.237331  310379 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:12:14.237454  310379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:12:14.237465  310379 out.go:358] Setting ErrFile to fd 2...
	I1011 21:12:14.237471  310379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:12:14.237821  310379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	I1011 21:12:14.238201  310379 out.go:352] Setting JSON to false
	I1011 21:12:14.239156  310379 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10478,"bootTime":1728670657,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1011 21:12:14.239232  310379 start.go:139] virtualization:  
	I1011 21:12:14.242620  310379 out.go:177] * [functional-824457] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1011 21:12:14.245234  310379 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:12:14.245310  310379 notify.go:220] Checking for updates...
	I1011 21:12:14.250702  310379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:12:14.253571  310379 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	I1011 21:12:14.255969  310379 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	I1011 21:12:14.258484  310379 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 21:12:14.260881  310379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:12:14.264251  310379 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:12:14.264844  310379 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:12:14.296325  310379 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 21:12:14.296503  310379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:12:14.344389  310379 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 21:12:14.335206968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:12:14.344509  310379 docker.go:318] overlay module found
	I1011 21:12:14.347207  310379 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1011 21:12:14.349772  310379 start.go:297] selected driver: docker
	I1011 21:12:14.349794  310379 start.go:901] validating driver "docker" against &{Name:functional-824457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:12:14.349911  310379 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:12:14.353019  310379 out.go:201] 
	W1011 21:12:14.355706  310379 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1011 21:12:14.358301  310379 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-824457 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-824457 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-kz848" [026861e4-6c7f-47c1-b9c5-e83e55183bb7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-kz848" [026861e4-6c7f-47c1-b9c5-e83e55183bb7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003949071s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31258
functional_test.go:1675: http://192.168.49.2:31258: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-kz848

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31258
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.72s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [57fe62d7-6879-4508-9d0d-0efa92e83fe1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004431183s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-824457 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-824457 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-824457 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-824457 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [68247790-87fa-4333-b6f1-870d45142d07] Pending
helpers_test.go:344: "sp-pod" [68247790-87fa-4333-b6f1-870d45142d07] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1011 21:11:50.740075  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [68247790-87fa-4333-b6f1-870d45142d07] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003808289s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-824457 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-824457 delete -f testdata/storage-provisioner/pod.yaml
E1011 21:12:00.982043  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-824457 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [94644b03-51a7-4876-8a26-847eb3224fb9] Pending
helpers_test.go:344: "sp-pod" [94644b03-51a7-4876-8a26-847eb3224fb9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003834279s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-824457 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.93s)
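The persistence check above boils down to: write a marker file through the first sp-pod, delete and recreate the pod, then confirm the marker is still visible from the new pod. A minimal local sketch of that idea, with a temp directory standing in for the PVC-backed mount (the names here are illustrative, not from the test):

```shell
# A PersistentVolume outlives any one pod; simulate it with a directory.
vol=$(mktemp -d)

# First "pod" writes a marker onto the volume
# (kubectl exec sp-pod -- touch /tmp/mount/foo):
touch "$vol/foo"

# The pod is deleted and recreated here; the volume is untouched.

# Second "pod" lists the volume and still sees the marker
# (kubectl exec sp-pod -- ls /tmp/mount):
ls "$vol"   # → foo
```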

TestFunctional/parallel/SSHCmd (0.65s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "echo hello"
E1011 21:11:40.808795  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

TestFunctional/parallel/CpCmd (2.17s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh -n functional-824457 "sudo cat /home/docker/cp-test.txt"
E1011 21:11:40.481210  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:11:40.488153  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 cp functional-824457:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd687818146/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh -n functional-824457 "sudo cat /home/docker/cp-test.txt"
E1011 21:11:41.130594  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
E1011 21:11:41.773226  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh -n functional-824457 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.17s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/282920/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo cat /etc/test/nested/copy/282920/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/282920.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo cat /etc/ssl/certs/282920.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/282920.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo cat /usr/share/ca-certificates/282920.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2829202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo cat /etc/ssl/certs/2829202.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2829202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo cat /usr/share/ca-certificates/2829202.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)

TestFunctional/parallel/NodeLabels (0.16s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-824457 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.16s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-824457 ssh "sudo systemctl is-active docker": exit status 1 (384.936735ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-824457 ssh "sudo systemctl is-active containerd": exit status 1 (285.332847ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
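The non-zero exits above are the expected PASS path: `systemctl is-active` prints the unit state and encodes a non-active unit in its exit code (typically 3), which the ssh wrapper surfaces as "Process exited with status 3". A sketch of that exit-code convention, simulated without systemd (the `is_active` helper is a stand-in, not a real command):

```shell
# Stand-in for `systemctl is-active <unit>` on a host where the unit is
# stopped: print the state and return the conventional exit code 3.
is_active() { echo "inactive"; return 3; }

is_active docker        # prints: inactive
echo "exit code: $?"    # prints: exit code: 3
```

The test only needs the runtime to be *not* active, so the command failing is what makes the test pass.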

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-824457 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-824457 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-824457 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 308166: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-824457 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-824457 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-824457 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [64a7aa84-291d-4eba-ab6b-e29b802e23c5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1011 21:11:43.056133  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:11:45.617894  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-svc" [64a7aa84-291d-4eba-ab6b-e29b802e23c5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004878581s
I1011 21:11:51.362863  282920 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-824457 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.119.108 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-824457 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-824457 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-824457 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-5c8fs" [eb34e269-43cd-49bf-bfc7-da20ae6f9108] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-5c8fs" [eb34e269-43cd-49bf-bfc7-da20ae6f9108] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.008208923s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "380.800977ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "63.04694ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "350.030052ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "59.658466ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (9.19s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdany-port1311166766/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728681130529147844" to /tmp/TestFunctionalparallelMountCmdany-port1311166766/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728681130529147844" to /tmp/TestFunctionalparallelMountCmdany-port1311166766/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728681130529147844" to /tmp/TestFunctionalparallelMountCmdany-port1311166766/001/test-1728681130529147844
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-824457 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (358.638305ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1011 21:12:10.888056  282920 retry.go:31] will retry after 328.808676ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 11 21:12 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 11 21:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 11 21:12 test-1728681130529147844
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh cat /mount-9p/test-1728681130529147844
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-824457 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6fdd4ed5-8149-48c1-bdb3-454733ccd428] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6fdd4ed5-8149-48c1-bdb3-454733ccd428] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6fdd4ed5-8149-48c1-bdb3-454733ccd428] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004522862s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-824457 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdany-port1311166766/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.19s)
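The "will retry after 328.808676ms" line above comes from a probe-and-retry loop: the first `findmnt` runs before the 9p mount is visible, fails, and is re-run after a short backoff. A shell sketch of the same pattern (the `probe` function is a stand-in for the real `findmnt` check and simply becomes ready on its second call):

```shell
attempt=0
# Stand-in probe: fails on the first call, succeeds on the second,
# mimicking a mount that becomes ready between attempts.
probe() { attempt=$((attempt + 1)); [ "$attempt" -ge 2 ]; }

until probe; do
  echo "attempt $attempt failed, backing off"
  sleep 0.1   # the real helper (retry.go:31) uses a randomized delay
done
echo "mount visible on attempt $attempt"
```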

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 service list -o json
functional_test.go:1494: Took "584.648755ms" to run "out/minikube-linux-arm64 -p functional-824457 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31725
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31725
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/MountCmd/specific-port (1.59s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdspecific-port3472640442/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdspecific-port3472640442/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-824457 ssh "sudo umount -f /mount-9p": exit status 1 (413.554639ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-824457 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdspecific-port3472640442/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)

TestFunctional/parallel/MountCmd/VerifyCleanup (3.02s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1144246622/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1144246622/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1144246622/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "findmnt -T" /mount1
E1011 21:12:21.463730  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-824457 ssh "findmnt -T" /mount1: exit status 1 (991.136111ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1011 21:12:22.301750  282920 retry.go:31] will retry after 664.912633ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "findmnt -T" /mount2
2024/10/11 21:12:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-824457 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1144246622/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1144246622/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-824457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1144246622/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.02s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.89s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.89s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-824457 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-824457
localhost/kicbase/echo-server:functional-824457
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-824457 image ls --format short --alsologtostderr:
I1011 21:12:31.740738  313197 out.go:345] Setting OutFile to fd 1 ...
I1011 21:12:31.740939  313197 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:12:31.740966  313197 out.go:358] Setting ErrFile to fd 2...
I1011 21:12:31.740984  313197 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:12:31.741266  313197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
I1011 21:12:31.741946  313197 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:12:31.742137  313197 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:12:31.742731  313197 cli_runner.go:164] Run: docker container inspect functional-824457 --format={{.State.Status}}
I1011 21:12:31.765912  313197 ssh_runner.go:195] Run: systemctl --version
I1011 21:12:31.765967  313197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-824457
I1011 21:12:31.785236  313197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/functional-824457/id_rsa Username:docker}
I1011 21:12:31.875880  313197 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-824457 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 0bcd66b03df5f | 98.3MB |
| docker.io/library/nginx                 | latest             | 048e090385966 | 201MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | alpine             | 577a23b5858b9 | 52.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/kicbase/echo-server           | functional-824457  | ce2d2cda2d858 | 4.79MB |
| localhost/minikube-local-cache-test     | functional-824457  | 9a996362af21c | 3.33kB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-824457 image ls --format table --alsologtostderr:
I1011 21:12:32.033403  313267 out.go:345] Setting OutFile to fd 1 ...
I1011 21:12:32.033618  313267 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:12:32.033648  313267 out.go:358] Setting ErrFile to fd 2...
I1011 21:12:32.033668  313267 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:12:32.033925  313267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
I1011 21:12:32.034622  313267 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:12:32.034793  313267 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:12:32.035348  313267 cli_runner.go:164] Run: docker container inspect functional-824457 --format={{.State.Status}}
I1011 21:12:32.062272  313267 ssh_runner.go:195] Run: systemctl --version
I1011 21:12:32.062328  313267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-824457
I1011 21:12:32.086998  313267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/functional-824457/id_rsa Username:docker}
I1011 21:12:32.179653  313267 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-824457 image ls --format json --alsologtostderr:
[{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"200984127"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"98291250"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52254450"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-824457"],"size":"4788229"},{"id":"9a996362af21c9f46605d11ec2f60f8891926ebbfd96112064e91c3ea3804ce2","repoDigests":["localhost/minikube-local-cache-test@sha256:8e75820befec6bf6c176e8ce2bba5331346139b8db06404c68647b66bd9c370d"],"repoTags":["localhost/minikube-local-cache-test:functional-824457"],"size":"3330"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-824457 image ls --format json --alsologtostderr:
I1011 21:12:32.022053  313262 out.go:345] Setting OutFile to fd 1 ...
I1011 21:12:32.022258  313262 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:12:32.022270  313262 out.go:358] Setting ErrFile to fd 2...
I1011 21:12:32.022275  313262 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:12:32.022569  313262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
I1011 21:12:32.023276  313262 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:12:32.023436  313262 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:12:32.023972  313262 cli_runner.go:164] Run: docker container inspect functional-824457 --format={{.State.Status}}
I1011 21:12:32.057586  313262 ssh_runner.go:195] Run: systemctl --version
I1011 21:12:32.057643  313262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-824457
I1011 21:12:32.079485  313262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/functional-824457/id_rsa Username:docker}
I1011 21:12:32.171239  313262 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-824457 image ls --format yaml --alsologtostderr:
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "98291250"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478
repoTags:
- docker.io/library/nginx:alpine
size: "52254450"
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "200984127"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-824457
size: "4788229"
- id: 9a996362af21c9f46605d11ec2f60f8891926ebbfd96112064e91c3ea3804ce2
repoDigests:
- localhost/minikube-local-cache-test@sha256:8e75820befec6bf6c176e8ce2bba5331346139b8db06404c68647b66bd9c370d
repoTags:
- localhost/minikube-local-cache-test:functional-824457
size: "3330"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-824457 image ls --format yaml --alsologtostderr:
I1011 21:12:31.720768  313198 out.go:345] Setting OutFile to fd 1 ...
I1011 21:12:31.720993  313198 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:12:31.721020  313198 out.go:358] Setting ErrFile to fd 2...
I1011 21:12:31.721045  313198 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:12:31.721439  313198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
I1011 21:12:31.722488  313198 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:12:31.722708  313198 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:12:31.724124  313198 cli_runner.go:164] Run: docker container inspect functional-824457 --format={{.State.Status}}
I1011 21:12:31.753088  313198 ssh_runner.go:195] Run: systemctl --version
I1011 21:12:31.753145  313198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-824457
I1011 21:12:31.780536  313198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/functional-824457/id_rsa Username:docker}
I1011 21:12:31.876051  313198 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-824457 ssh pgrep buildkitd: exit status 1 (263.866728ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image build -t localhost/my-image:functional-824457 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-824457 image build -t localhost/my-image:functional-824457 testdata/build --alsologtostderr: (3.134035053s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-824457 image build -t localhost/my-image:functional-824457 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f5a325f8bb3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-824457
--> e81adcd7db6
Successfully tagged localhost/my-image:functional-824457
e81adcd7db6242ff20a887812fc7c53290a60ac99fef6401cdb84393d094d84a
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-824457 image build -t localhost/my-image:functional-824457 testdata/build --alsologtostderr:
I1011 21:12:32.544542  313384 out.go:345] Setting OutFile to fd 1 ...
I1011 21:12:32.545165  313384 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:12:32.545178  313384 out.go:358] Setting ErrFile to fd 2...
I1011 21:12:32.545184  313384 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:12:32.545436  313384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
I1011 21:12:32.546103  313384 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:12:32.546689  313384 config.go:182] Loaded profile config "functional-824457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:12:32.547330  313384 cli_runner.go:164] Run: docker container inspect functional-824457 --format={{.State.Status}}
I1011 21:12:32.564053  313384 ssh_runner.go:195] Run: systemctl --version
I1011 21:12:32.564110  313384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-824457
I1011 21:12:32.588342  313384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/functional-824457/id_rsa Username:docker}
I1011 21:12:32.683418  313384 build_images.go:161] Building image from path: /tmp/build.1842058744.tar
I1011 21:12:32.683508  313384 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1011 21:12:32.692373  313384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1842058744.tar
I1011 21:12:32.695833  313384 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1842058744.tar: stat -c "%s %y" /var/lib/minikube/build/build.1842058744.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1842058744.tar': No such file or directory
I1011 21:12:32.695866  313384 ssh_runner.go:362] scp /tmp/build.1842058744.tar --> /var/lib/minikube/build/build.1842058744.tar (3072 bytes)
I1011 21:12:32.721908  313384 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1842058744
I1011 21:12:32.730999  313384 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1842058744 -xf /var/lib/minikube/build/build.1842058744.tar
I1011 21:12:32.740457  313384 crio.go:315] Building image: /var/lib/minikube/build/build.1842058744
I1011 21:12:32.740542  313384 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-824457 /var/lib/minikube/build/build.1842058744 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1011 21:12:35.598488  313384 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-824457 /var/lib/minikube/build/build.1842058744 --cgroup-manager=cgroupfs: (2.857917336s)
I1011 21:12:35.598575  313384 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1842058744
I1011 21:12:35.608105  313384 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1842058744.tar
I1011 21:12:35.617116  313384 build_images.go:217] Built localhost/my-image:functional-824457 from /tmp/build.1842058744.tar
I1011 21:12:35.617144  313384 build_images.go:133] succeeded building to: functional-824457
I1011 21:12:35.617150  313384 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)
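The stderr trace above shows the build path minikube takes on a crio node: tar the local build context, stat-check whether the tarball already exists on the node, copy it over, unpack it, then run `podman build` inside the node. That flow can be sketched locally — directory names here are illustrative stand-ins (no container runtime or ssh involved), and the final `podman build` step is replaced by a plain `ls` so the sketch runs anywhere:

```shell
set -eu

# Illustrative stand-ins: "$remote" plays the role of /var/lib/minikube/build
# inside the node; "$ctx" is the local build context directory.
ctx=$(mktemp -d)
remote=$(mktemp -d)

printf 'hello\n' > "$ctx/content.txt"
tar -C "$ctx" -cf "$ctx.tar" .                 # tar the context, as build_images.go does

# Existence check mirroring ssh_runner.go: stat exits 1 when the file is absent
if ! stat "$remote/build.tar" >/dev/null 2>&1; then
  cp "$ctx.tar" "$remote/build.tar"            # stands in for the scp step
fi

mkdir -p "$remote/build"
tar -C "$remote/build" -xf "$remote/build.tar" # unpack into a per-build directory

# minikube would now run: sudo podman build -t localhost/my-image:... "$remote/build"
ls "$remote/build"
```

The stat-before-scp check is why the log records a `Process exited with status 1` for the tarball before copying it — a failed stat there is the expected path, not an error.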
TestFunctional/parallel/ImageCommands/Setup (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-824457
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.81s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image load --daemon kicbase/echo-server:functional-824457 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-824457 image load --daemon kicbase/echo-server:functional-824457 --alsologtostderr: (1.301522902s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.58s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image load --daemon kicbase/echo-server:functional-824457 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-824457
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image load --daemon kicbase/echo-server:functional-824457 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image save kicbase/echo-server:functional-824457 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image rm kicbase/echo-server:functional-824457 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-824457
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-824457 image save --daemon kicbase/echo-server:functional-824457 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-824457
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-824457
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-824457
--- PASS: TestFunctional/delete_my-image_image (0.01s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-824457
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestMultiControlPlane/serial/StartCluster (173.28s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-320408 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1011 21:13:02.425692  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:14:24.347002  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-320408 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m52.473866754s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (173.28s)
TestMultiControlPlane/serial/DeployApp (8.92s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-320408 -- rollout status deployment/busybox: (5.772086441s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-bm8nz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-sfxn9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-t9j6b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-bm8nz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-sfxn9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-t9j6b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-bm8nz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-sfxn9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-t9j6b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.92s)
TestMultiControlPlane/serial/PingHostFromPods (1.59s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-bm8nz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-bm8nz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-sfxn9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-sfxn9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-t9j6b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-320408 -- exec busybox-7dff88458-t9j6b -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)
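The `nslookup | awk 'NR==5' | cut -d' ' -f3` pipeline run inside each busybox pod (ha_test.go:207) extracts the host IP from busybox-style resolver output: line 5, third space-separated field. A sketch against a canned transcript — the transcript below is fabricated for illustration, shaped like busybox `nslookup` output, with the cluster's `192.168.49.1` host address:

```shell
# Canned busybox-style nslookup output (illustrative sample, not captured from a pod)
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Same extraction the test runs in the pod: take line 5, split on single
# spaces, keep field 3 ("Address" = f1, "1:" = f2, the IP = f3)
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"   # → 192.168.49.1
```

ha_test.go:218 then feeds that value to `ping -c 1` from the same pod, which is what makes the test sensitive to the exact line/field layout of busybox's resolver output.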
TestMultiControlPlane/serial/AddWorkerNode (44.28s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-320408 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-320408 -v=7 --alsologtostderr: (43.303677677s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.28s)
TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-320408 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)
TestMultiControlPlane/serial/CopyFile (18.55s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp testdata/cp-test.txt ha-320408:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1656533823/001/cp-test_ha-320408.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408:/home/docker/cp-test.txt ha-320408-m02:/home/docker/cp-test_ha-320408_ha-320408-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m02 "sudo cat /home/docker/cp-test_ha-320408_ha-320408-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408:/home/docker/cp-test.txt ha-320408-m03:/home/docker/cp-test_ha-320408_ha-320408-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m03 "sudo cat /home/docker/cp-test_ha-320408_ha-320408-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408:/home/docker/cp-test.txt ha-320408-m04:/home/docker/cp-test_ha-320408_ha-320408-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m04 "sudo cat /home/docker/cp-test_ha-320408_ha-320408-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp testdata/cp-test.txt ha-320408-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1656533823/001/cp-test_ha-320408-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m02:/home/docker/cp-test.txt ha-320408:/home/docker/cp-test_ha-320408-m02_ha-320408.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408 "sudo cat /home/docker/cp-test_ha-320408-m02_ha-320408.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m02:/home/docker/cp-test.txt ha-320408-m03:/home/docker/cp-test_ha-320408-m02_ha-320408-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m03 "sudo cat /home/docker/cp-test_ha-320408-m02_ha-320408-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m02:/home/docker/cp-test.txt ha-320408-m04:/home/docker/cp-test_ha-320408-m02_ha-320408-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m04 "sudo cat /home/docker/cp-test_ha-320408-m02_ha-320408-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp testdata/cp-test.txt ha-320408-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1656533823/001/cp-test_ha-320408-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m03:/home/docker/cp-test.txt ha-320408:/home/docker/cp-test_ha-320408-m03_ha-320408.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408 "sudo cat /home/docker/cp-test_ha-320408-m03_ha-320408.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m03:/home/docker/cp-test.txt ha-320408-m02:/home/docker/cp-test_ha-320408-m03_ha-320408-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m03 "sudo cat /home/docker/cp-test.txt"
E1011 21:16:40.480435  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m02 "sudo cat /home/docker/cp-test_ha-320408-m03_ha-320408-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m03:/home/docker/cp-test.txt ha-320408-m04:/home/docker/cp-test_ha-320408-m03_ha-320408-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m04 "sudo cat /home/docker/cp-test_ha-320408-m03_ha-320408-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp testdata/cp-test.txt ha-320408-m04:/home/docker/cp-test.txt
E1011 21:16:41.937735  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:16:41.945931  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:16:41.958107  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:16:41.979570  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:16:42.024298  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:16:42.105640  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m04 "sudo cat /home/docker/cp-test.txt"
E1011 21:16:42.266968  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1656533823/001/cp-test_ha-320408-m04.txt
E1011 21:16:42.588961  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m04:/home/docker/cp-test.txt ha-320408:/home/docker/cp-test_ha-320408-m04_ha-320408.txt
E1011 21:16:43.231761  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408 "sudo cat /home/docker/cp-test_ha-320408-m04_ha-320408.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m04:/home/docker/cp-test.txt ha-320408-m02:/home/docker/cp-test_ha-320408-m04_ha-320408-m02.txt
E1011 21:16:44.513669  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m02 "sudo cat /home/docker/cp-test_ha-320408-m04_ha-320408-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 cp ha-320408-m04:/home/docker/cp-test.txt ha-320408-m03:/home/docker/cp-test_ha-320408-m04_ha-320408-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 ssh -n ha-320408-m03 "sudo cat /home/docker/cp-test_ha-320408-m04_ha-320408-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.55s)
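The CopyFile run above is an all-pairs matrix: seed `cp-test.txt` onto each node, read it back via `ssh ... cat`, then copy it from that node to every other node under a `cp-test_<src>_<dst>.txt` name. With local directories standing in for the four nodes (the `ha-demo*` names are illustrative, not the profile used in the run):

```shell
set -eu

# Directories stand in for the four cluster nodes
nodes="ha-demo ha-demo-m02 ha-demo-m03 ha-demo-m04"
work=$(mktemp -d)
for n in $nodes; do mkdir -p "$work/$n"; done
printf 'cp-test payload\n' > "$work/cp-test.txt"

# Same shape as the test matrix: seed each node, then fan out to every peer
for src in $nodes; do
  cp "$work/cp-test.txt" "$work/$src/cp-test.txt"   # host -> node ("minikube cp")
  cat "$work/$src/cp-test.txt" >/dev/null           # read-back ("ssh -n ... sudo cat")
  for dst in $nodes; do
    if [ "$src" = "$dst" ]; then continue; fi
    cp "$work/$src/cp-test.txt" "$work/$dst/cp-test_${src}_${dst}.txt"  # node -> node
  done
done

# Each node ends with 4 files: its own copy plus one from each of the 3 peers
ls "$work/ha-demo-m04" | wc -l
```

That 4-node × 4-target fan-out is why the section records dozens of near-identical `cp`/`ssh` invocations and takes ~18 s of wall time.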
TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 node stop m02 -v=7 --alsologtostderr
E1011 21:16:47.075471  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:16:52.196735  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-320408 node stop m02 -v=7 --alsologtostderr: (12.016437692s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr: exit status 7 (732.615382ms)
-- stdout --
	ha-320408
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-320408-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-320408-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-320408-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:16:58.398231  329183 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:16:58.398389  329183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:16:58.398399  329183 out.go:358] Setting ErrFile to fd 2...
	I1011 21:16:58.398404  329183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:16:58.398643  329183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	I1011 21:16:58.398819  329183 out.go:352] Setting JSON to false
	I1011 21:16:58.398906  329183 mustload.go:65] Loading cluster: ha-320408
	I1011 21:16:58.398990  329183 notify.go:220] Checking for updates...
	I1011 21:16:58.399352  329183 config.go:182] Loaded profile config "ha-320408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:16:58.399375  329183 status.go:174] checking status of ha-320408 ...
	I1011 21:16:58.399992  329183 cli_runner.go:164] Run: docker container inspect ha-320408 --format={{.State.Status}}
	I1011 21:16:58.422761  329183 status.go:371] ha-320408 host status = "Running" (err=<nil>)
	I1011 21:16:58.422786  329183 host.go:66] Checking if "ha-320408" exists ...
	I1011 21:16:58.423156  329183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-320408
	I1011 21:16:58.457759  329183 host.go:66] Checking if "ha-320408" exists ...
	I1011 21:16:58.458077  329183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:16:58.458119  329183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-320408
	I1011 21:16:58.479494  329183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/ha-320408/id_rsa Username:docker}
	I1011 21:16:58.582517  329183 ssh_runner.go:195] Run: systemctl --version
	I1011 21:16:58.586934  329183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:16:58.599345  329183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:16:58.652757  329183 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-11 21:16:58.642094128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:16:58.653348  329183 kubeconfig.go:125] found "ha-320408" server: "https://192.168.49.254:8443"
	I1011 21:16:58.653385  329183 api_server.go:166] Checking apiserver status ...
	I1011 21:16:58.653432  329183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:16:58.664560  329183 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup
	I1011 21:16:58.674144  329183 api_server.go:182] apiserver freezer: "13:freezer:/docker/24e5c9788e9ac31c2c33d3cf7cb14ee300ec0df4f448094d9f7ae9027390467b/crio/crio-eb6937fe9acd24bbe122756a41ed4a8dbe6b7aa11f53cec364da7ee79d5d6ee5"
	I1011 21:16:58.674215  329183 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/24e5c9788e9ac31c2c33d3cf7cb14ee300ec0df4f448094d9f7ae9027390467b/crio/crio-eb6937fe9acd24bbe122756a41ed4a8dbe6b7aa11f53cec364da7ee79d5d6ee5/freezer.state
	I1011 21:16:58.682980  329183 api_server.go:204] freezer state: "THAWED"
	I1011 21:16:58.683011  329183 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1011 21:16:58.692255  329183 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1011 21:16:58.692286  329183 status.go:463] ha-320408 apiserver status = Running (err=<nil>)
	I1011 21:16:58.692296  329183 status.go:176] ha-320408 status: &{Name:ha-320408 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:16:58.692313  329183 status.go:174] checking status of ha-320408-m02 ...
	I1011 21:16:58.692633  329183 cli_runner.go:164] Run: docker container inspect ha-320408-m02 --format={{.State.Status}}
	I1011 21:16:58.709017  329183 status.go:371] ha-320408-m02 host status = "Stopped" (err=<nil>)
	I1011 21:16:58.709041  329183 status.go:384] host is not running, skipping remaining checks
	I1011 21:16:58.709050  329183 status.go:176] ha-320408-m02 status: &{Name:ha-320408-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:16:58.709072  329183 status.go:174] checking status of ha-320408-m03 ...
	I1011 21:16:58.709394  329183 cli_runner.go:164] Run: docker container inspect ha-320408-m03 --format={{.State.Status}}
	I1011 21:16:58.727079  329183 status.go:371] ha-320408-m03 host status = "Running" (err=<nil>)
	I1011 21:16:58.727104  329183 host.go:66] Checking if "ha-320408-m03" exists ...
	I1011 21:16:58.727424  329183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-320408-m03
	I1011 21:16:58.743115  329183 host.go:66] Checking if "ha-320408-m03" exists ...
	I1011 21:16:58.743430  329183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:16:58.743476  329183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-320408-m03
	I1011 21:16:58.759432  329183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/ha-320408-m03/id_rsa Username:docker}
	I1011 21:16:58.848687  329183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:16:58.860521  329183 kubeconfig.go:125] found "ha-320408" server: "https://192.168.49.254:8443"
	I1011 21:16:58.860552  329183 api_server.go:166] Checking apiserver status ...
	I1011 21:16:58.860600  329183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:16:58.872271  329183 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1312/cgroup
	I1011 21:16:58.881985  329183 api_server.go:182] apiserver freezer: "13:freezer:/docker/9f2e75cab04545e50fa174cb9642b0f77a21db9f3725af9b154f4b6dcd6ce830/crio/crio-ce106c7ea49545b7b42be0bff5cacdfc6b864e9fb95bce249036f68d3a441f91"
	I1011 21:16:58.882071  329183 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9f2e75cab04545e50fa174cb9642b0f77a21db9f3725af9b154f4b6dcd6ce830/crio/crio-ce106c7ea49545b7b42be0bff5cacdfc6b864e9fb95bce249036f68d3a441f91/freezer.state
	I1011 21:16:58.890947  329183 api_server.go:204] freezer state: "THAWED"
	I1011 21:16:58.890979  329183 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1011 21:16:58.899160  329183 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1011 21:16:58.899202  329183 status.go:463] ha-320408-m03 apiserver status = Running (err=<nil>)
	I1011 21:16:58.899212  329183 status.go:176] ha-320408-m03 status: &{Name:ha-320408-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:16:58.899232  329183 status.go:174] checking status of ha-320408-m04 ...
	I1011 21:16:58.899615  329183 cli_runner.go:164] Run: docker container inspect ha-320408-m04 --format={{.State.Status}}
	I1011 21:16:58.916348  329183 status.go:371] ha-320408-m04 host status = "Running" (err=<nil>)
	I1011 21:16:58.916377  329183 host.go:66] Checking if "ha-320408-m04" exists ...
	I1011 21:16:58.916726  329183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-320408-m04
	I1011 21:16:58.948619  329183 host.go:66] Checking if "ha-320408-m04" exists ...
	I1011 21:16:58.948943  329183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:16:58.948983  329183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-320408-m04
	I1011 21:16:58.967099  329183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/ha-320408-m04/id_rsa Username:docker}
	I1011 21:16:59.056228  329183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:16:59.070257  329183 status.go:176] ha-320408-m04 status: &{Name:ha-320408-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (23.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 node start m02 -v=7 --alsologtostderr
E1011 21:17:02.439007  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:17:08.188638  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-320408 node start m02 -v=7 --alsologtostderr: (21.601980542s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr: (1.410600349s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
E1011 21:17:22.920827  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.16s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.320410154s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.32s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (191.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-320408 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-320408 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-320408 -v=7 --alsologtostderr: (37.396791811s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-320408 --wait=true -v=7 --alsologtostderr
E1011 21:18:03.882163  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:19:25.803954  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-320408 --wait=true -v=7 --alsologtostderr: (2m33.607046938s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-320408
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (191.20s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-320408 node delete m03 -v=7 --alsologtostderr: (11.625758863s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.55s)
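The `go-template` passed to `kubectl get nodes` above can be sanity-checked offline by evaluating the same template against a minimal, hypothetical node list (a stand-in for `kubectl get nodes -o json` output — the data below is illustrative only, not taken from this run):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// render evaluates the same template string the test passes to kubectl.
// kubectl evaluates go-templates against untyped JSON, so plain maps
// reproduce its lowercase field lookups (.items, .status.conditions).
func render() string {
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// node builds one fake node whose only condition is Ready=<ready>.
	node := func(ready string) map[string]any {
		return map[string]any{
			"status": map[string]any{
				"conditions": []any{
					map[string]any{"type": "Ready", "status": ready},
				},
			},
		}
	}
	// Three nodes, matching the expected post-delete cluster shape.
	data := map[string]any{
		"items": []any{node("True"), node("True"), node("True")},
	}

	var sb strings.Builder
	if err := template.Must(template.New("nodes").Parse(tmpl)).Execute(&sb, data); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	fmt.Print(render()) // one " True" line per Ready node
}
```

The test asserts exactly this: one `True` line per node, so a missing or `False` Ready condition for any node fails the check.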

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-320408 stop -v=7 --alsologtostderr: (35.611998548s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr: exit status 7 (119.04559ms)

                                                
                                                
-- stdout --
	ha-320408
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-320408-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-320408-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:21:24.451781  343237 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:21:24.451917  343237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:21:24.451928  343237 out.go:358] Setting ErrFile to fd 2...
	I1011 21:21:24.451933  343237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:21:24.452194  343237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	I1011 21:21:24.452385  343237 out.go:352] Setting JSON to false
	I1011 21:21:24.452428  343237 mustload.go:65] Loading cluster: ha-320408
	I1011 21:21:24.452511  343237 notify.go:220] Checking for updates...
	I1011 21:21:24.452886  343237 config.go:182] Loaded profile config "ha-320408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:21:24.452908  343237 status.go:174] checking status of ha-320408 ...
	I1011 21:21:24.453405  343237 cli_runner.go:164] Run: docker container inspect ha-320408 --format={{.State.Status}}
	I1011 21:21:24.471627  343237 status.go:371] ha-320408 host status = "Stopped" (err=<nil>)
	I1011 21:21:24.471652  343237 status.go:384] host is not running, skipping remaining checks
	I1011 21:21:24.471658  343237 status.go:176] ha-320408 status: &{Name:ha-320408 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:21:24.471687  343237 status.go:174] checking status of ha-320408-m02 ...
	I1011 21:21:24.472003  343237 cli_runner.go:164] Run: docker container inspect ha-320408-m02 --format={{.State.Status}}
	I1011 21:21:24.502183  343237 status.go:371] ha-320408-m02 host status = "Stopped" (err=<nil>)
	I1011 21:21:24.502204  343237 status.go:384] host is not running, skipping remaining checks
	I1011 21:21:24.502212  343237 status.go:176] ha-320408-m02 status: &{Name:ha-320408-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:21:24.502232  343237 status.go:174] checking status of ha-320408-m04 ...
	I1011 21:21:24.502544  343237 cli_runner.go:164] Run: docker container inspect ha-320408-m04 --format={{.State.Status}}
	I1011 21:21:24.518415  343237 status.go:371] ha-320408-m04 host status = "Stopped" (err=<nil>)
	I1011 21:21:24.518435  343237 status.go:384] host is not running, skipping remaining checks
	I1011 21:21:24.518442  343237 status.go:176] ha-320408-m04 status: &{Name:ha-320408-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (79.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-320408 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1011 21:21:40.480544  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:21:41.937584  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:22:09.645585  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-320408 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.366580202s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.30s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-320408 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-320408 --control-plane -v=7 --alsologtostderr: (1m14.691147705s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-320408 status -v=7 --alsologtostderr: (1.088684504s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.78s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.00s)

                                                
                                    
TestJSONOutput/start/Command (50.48s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-281126 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-281126 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (50.477786187s)
--- PASS: TestJSONOutput/start/Command (50.48s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.87s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-281126 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.87s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-281126 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.92s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-281126 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-281126 --output=json --user=testUser: (5.924539425s)
--- PASS: TestJSONOutput/stop/Command (5.92s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-223192 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-223192 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.458564ms)

-- stdout --
	{"specversion":"1.0","id":"adfc6654-46b5-4095-9c05-bb7e09fe70ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-223192] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8412d2e4-fc5c-4429-94cc-8cca23fed86b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19749"}}
	{"specversion":"1.0","id":"af8d472e-11a6-4cf8-8010-79e56022663b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6cb32fef-11a9-4126-987b-fe694c2f45de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig"}}
	{"specversion":"1.0","id":"29602cea-4e0e-4561-a09f-de639668b7ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube"}}
	{"specversion":"1.0","id":"d4b4d17e-1202-40b9-8410-7445c8f96e75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"16ce162d-b10e-478c-863a-d54c15cf5796","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c3564b0-64c2-4aca-b4ad-60423f622628","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-223192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-223192
--- PASS: TestErrorJSONOutput (0.21s)

TestKicCustomNetwork/create_custom_network (38.99s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-158722 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-158722 --network=: (36.928144796s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-158722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-158722
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-158722: (2.0339555s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.99s)

TestKicCustomNetwork/use_default_bridge_network (31.69s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-453598 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-453598 --network=bridge: (29.734484396s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-453598" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-453598
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-453598: (1.936750009s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.69s)

TestKicExistingNetwork (31.7s)

=== RUN   TestKicExistingNetwork
I1011 21:26:21.974488  282920 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1011 21:26:21.990101  282920 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1011 21:26:21.990184  282920 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1011 21:26:21.990201  282920 cli_runner.go:164] Run: docker network inspect existing-network
W1011 21:26:22.005150  282920 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1011 21:26:22.005186  282920 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1011 21:26:22.005204  282920 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1011 21:26:22.005316  282920 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1011 21:26:22.022283  282920 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7fcdd1d57e76 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b6:d6:cc:86} reservation:<nil>}
I1011 21:26:22.022670  282920 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001dfb360}
I1011 21:26:22.022701  282920 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1011 21:26:22.022753  282920 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1011 21:26:22.098005  282920 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-030612 --network=existing-network
E1011 21:26:40.480616  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:26:41.937438  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-030612 --network=existing-network: (29.612937784s)
helpers_test.go:175: Cleaning up "existing-network-030612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-030612
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-030612: (1.92858896s)
I1011 21:26:53.653475  282920 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.70s)

TestKicCustomSubnet (34.58s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-942882 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-942882 --subnet=192.168.60.0/24: (32.377569711s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-942882 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-942882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-942882
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-942882: (2.179114256s)
--- PASS: TestKicCustomSubnet (34.58s)

TestKicStaticIP (34.81s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-250533 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-250533 --static-ip=192.168.200.200: (32.545629785s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-250533 ip
helpers_test.go:175: Cleaning up "static-ip-250533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-250533
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-250533: (2.112371719s)
--- PASS: TestKicStaticIP (34.81s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (68.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-241210 --driver=docker  --container-runtime=crio
E1011 21:28:03.549910  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-241210 --driver=docker  --container-runtime=crio: (31.680331129s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-243778 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-243778 --driver=docker  --container-runtime=crio: (31.545066249s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-241210
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-243778
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-243778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-243778
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-243778: (1.983164354s)
helpers_test.go:175: Cleaning up "first-241210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-241210
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-241210: (2.228181861s)
--- PASS: TestMinikubeProfile (68.72s)

TestMountStart/serial/StartWithMountFirst (6.85s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-710043 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-710043 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.849258002s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.85s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-710043 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (7.23s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-711971 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-711971 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.22797633s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.23s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-711971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-710043 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-710043 --alsologtostderr -v=5: (1.611422628s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-711971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-711971
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-711971: (1.201473148s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.63s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-711971
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-711971: (6.632462149s)
--- PASS: TestMountStart/serial/RestartStopped (7.63s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-711971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (74.99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-636315 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-636315 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m14.46623242s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.99s)

TestMultiNode/serial/DeployApp2Nodes (6.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-636315 -- rollout status deployment/busybox: (4.928738435s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- exec busybox-7dff88458-2vk5h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- exec busybox-7dff88458-wtjv4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- exec busybox-7dff88458-2vk5h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- exec busybox-7dff88458-wtjv4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- exec busybox-7dff88458-2vk5h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- exec busybox-7dff88458-wtjv4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.93s)

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- exec busybox-7dff88458-2vk5h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- exec busybox-7dff88458-2vk5h -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- exec busybox-7dff88458-wtjv4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-636315 -- exec busybox-7dff88458-wtjv4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (30.23s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-636315 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-636315 -v 3 --alsologtostderr: (29.584038083s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.23s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-636315 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.72s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp testdata/cp-test.txt multinode-636315:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp multinode-636315:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile551119311/001/cp-test_multinode-636315.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp multinode-636315:/home/docker/cp-test.txt multinode-636315-m02:/home/docker/cp-test_multinode-636315_multinode-636315-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m02 "sudo cat /home/docker/cp-test_multinode-636315_multinode-636315-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp multinode-636315:/home/docker/cp-test.txt multinode-636315-m03:/home/docker/cp-test_multinode-636315_multinode-636315-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m03 "sudo cat /home/docker/cp-test_multinode-636315_multinode-636315-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp testdata/cp-test.txt multinode-636315-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp multinode-636315-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile551119311/001/cp-test_multinode-636315-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp multinode-636315-m02:/home/docker/cp-test.txt multinode-636315:/home/docker/cp-test_multinode-636315-m02_multinode-636315.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315 "sudo cat /home/docker/cp-test_multinode-636315-m02_multinode-636315.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp multinode-636315-m02:/home/docker/cp-test.txt multinode-636315-m03:/home/docker/cp-test_multinode-636315-m02_multinode-636315-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m03 "sudo cat /home/docker/cp-test_multinode-636315-m02_multinode-636315-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp testdata/cp-test.txt multinode-636315-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp multinode-636315-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile551119311/001/cp-test_multinode-636315-m03.txt
E1011 21:31:40.480422  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp multinode-636315-m03:/home/docker/cp-test.txt multinode-636315:/home/docker/cp-test_multinode-636315-m03_multinode-636315.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315 "sudo cat /home/docker/cp-test_multinode-636315-m03_multinode-636315.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 cp multinode-636315-m03:/home/docker/cp-test.txt multinode-636315-m02:/home/docker/cp-test_multinode-636315-m03_multinode-636315-m02.txt
E1011 21:31:41.937742  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 ssh -n multinode-636315-m02 "sudo cat /home/docker/cp-test_multinode-636315-m03_multinode-636315-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.72s)

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-636315 node stop m03: (1.22609672s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-636315 status: exit status 7 (494.755498ms)

-- stdout --
	multinode-636315
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-636315-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-636315-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-636315 status --alsologtostderr: exit status 7 (520.854762ms)

-- stdout --
	multinode-636315
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-636315-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-636315-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1011 21:31:44.517609  396268 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:31:44.517791  396268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:31:44.517821  396268 out.go:358] Setting ErrFile to fd 2...
	I1011 21:31:44.517841  396268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:31:44.518262  396268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	I1011 21:31:44.519071  396268 out.go:352] Setting JSON to false
	I1011 21:31:44.519129  396268 mustload.go:65] Loading cluster: multinode-636315
	I1011 21:31:44.519770  396268 config.go:182] Loaded profile config "multinode-636315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:31:44.519796  396268 status.go:174] checking status of multinode-636315 ...
	I1011 21:31:44.519845  396268 notify.go:220] Checking for updates...
	I1011 21:31:44.520671  396268 cli_runner.go:164] Run: docker container inspect multinode-636315 --format={{.State.Status}}
	I1011 21:31:44.539096  396268 status.go:371] multinode-636315 host status = "Running" (err=<nil>)
	I1011 21:31:44.539120  396268 host.go:66] Checking if "multinode-636315" exists ...
	I1011 21:31:44.539526  396268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-636315
	I1011 21:31:44.563094  396268 host.go:66] Checking if "multinode-636315" exists ...
	I1011 21:31:44.563563  396268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:31:44.563614  396268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636315
	I1011 21:31:44.584133  396268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33268 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/multinode-636315/id_rsa Username:docker}
	I1011 21:31:44.680368  396268 ssh_runner.go:195] Run: systemctl --version
	I1011 21:31:44.684798  396268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:31:44.696302  396268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:31:44.749405  396268 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-11 21:31:44.739697352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:31:44.750004  396268 kubeconfig.go:125] found "multinode-636315" server: "https://192.168.67.2:8443"
	I1011 21:31:44.750037  396268 api_server.go:166] Checking apiserver status ...
	I1011 21:31:44.750089  396268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:31:44.761128  396268 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1400/cgroup
	I1011 21:31:44.770604  396268 api_server.go:182] apiserver freezer: "13:freezer:/docker/4920be61956657ce1482cfe6905d9dce273afe14c08b56d34defcdf31b4f5e0b/crio/crio-eb1668c24d93386ceea3ac34499475044252547d415907da6faaca1578dc54b8"
	I1011 21:31:44.770677  396268 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4920be61956657ce1482cfe6905d9dce273afe14c08b56d34defcdf31b4f5e0b/crio/crio-eb1668c24d93386ceea3ac34499475044252547d415907da6faaca1578dc54b8/freezer.state
	I1011 21:31:44.779746  396268 api_server.go:204] freezer state: "THAWED"
	I1011 21:31:44.779773  396268 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1011 21:31:44.787298  396268 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1011 21:31:44.787325  396268 status.go:463] multinode-636315 apiserver status = Running (err=<nil>)
	I1011 21:31:44.787337  396268 status.go:176] multinode-636315 status: &{Name:multinode-636315 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:31:44.787356  396268 status.go:174] checking status of multinode-636315-m02 ...
	I1011 21:31:44.787685  396268 cli_runner.go:164] Run: docker container inspect multinode-636315-m02 --format={{.State.Status}}
	I1011 21:31:44.805198  396268 status.go:371] multinode-636315-m02 host status = "Running" (err=<nil>)
	I1011 21:31:44.805221  396268 host.go:66] Checking if "multinode-636315-m02" exists ...
	I1011 21:31:44.805529  396268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-636315-m02
	I1011 21:31:44.822123  396268 host.go:66] Checking if "multinode-636315-m02" exists ...
	I1011 21:31:44.822438  396268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:31:44.822491  396268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636315-m02
	I1011 21:31:44.840521  396268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/19749-277533/.minikube/machines/multinode-636315-m02/id_rsa Username:docker}
	I1011 21:31:44.936446  396268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:31:44.948407  396268 status.go:176] multinode-636315-m02 status: &{Name:multinode-636315-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:31:44.948445  396268 status.go:174] checking status of multinode-636315-m03 ...
	I1011 21:31:44.948778  396268 cli_runner.go:164] Run: docker container inspect multinode-636315-m03 --format={{.State.Status}}
	I1011 21:31:44.970990  396268 status.go:371] multinode-636315-m03 host status = "Stopped" (err=<nil>)
	I1011 21:31:44.971012  396268 status.go:384] host is not running, skipping remaining checks
	I1011 21:31:44.971020  396268 status.go:176] multinode-636315-m03 status: &{Name:multinode-636315-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
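The `minikube status` text captured in the stdout blocks above has a simple layout: node blocks separated by a blank line, with the node name first and `key: value` fields below it. A minimal sketch of parsing that output into a map, assuming the plain-text format shown here (`parseStatus` is a hypothetical helper, not part of minikube):

```go
package main

import (
	"fmt"
	"strings"
)

// parseStatus splits plain-text `minikube status` output into one field map
// per node. Each node block starts with the node name; following lines are
// "key: value" pairs such as "host: Running" or "kubelet: Stopped".
func parseStatus(out string) map[string]map[string]string {
	nodes := map[string]map[string]string{}
	for _, block := range strings.Split(strings.TrimSpace(out), "\n\n") {
		lines := strings.Split(strings.TrimSpace(block), "\n")
		if len(lines) == 0 {
			continue
		}
		name := strings.TrimSpace(lines[0])
		fields := map[string]string{}
		for _, l := range lines[1:] {
			if k, v, ok := strings.Cut(l, ":"); ok {
				fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
		nodes[name] = fields
	}
	return nodes
}

func main() {
	out := `multinode-636315
type: Control Plane
host: Running

multinode-636315-m03
type: Worker
host: Stopped`
	st := parseStatus(out)
	fmt.Println(st["multinode-636315-m03"]["host"]) // Stopped
}
```

Note that `status` deliberately exits non-zero (exit status 7) when any node is stopped, so the test treats the non-zero exit as expected and only validates the text.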

TestMultiNode/serial/StartAfterStop (9.7s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-636315 node start m03 -v=7 --alsologtostderr: (8.96435944s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.70s)

TestMultiNode/serial/RestartKeepsNodes (101.39s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-636315
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-636315
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-636315: (24.85606846s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-636315 --wait=true -v=8 --alsologtostderr
E1011 21:33:05.007931  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-636315 --wait=true -v=8 --alsologtostderr: (1m16.401784464s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-636315
--- PASS: TestMultiNode/serial/RestartKeepsNodes (101.39s)

TestMultiNode/serial/DeleteNode (5.4s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-636315 node delete m03: (4.75811153s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.40s)

TestMultiNode/serial/StopMultiNode (23.82s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-636315 stop: (23.631167125s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-636315 status: exit status 7 (89.093143ms)

-- stdout --
	multinode-636315
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-636315-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-636315 status --alsologtostderr: exit status 7 (95.826034ms)

-- stdout --
	multinode-636315
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-636315-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1011 21:34:05.256769  403983 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:34:05.256964  403983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:34:05.256997  403983 out.go:358] Setting ErrFile to fd 2...
	I1011 21:34:05.257016  403983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:34:05.257404  403983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	I1011 21:34:05.257674  403983 out.go:352] Setting JSON to false
	I1011 21:34:05.257756  403983 mustload.go:65] Loading cluster: multinode-636315
	I1011 21:34:05.258500  403983 config.go:182] Loaded profile config "multinode-636315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:34:05.258546  403983 status.go:174] checking status of multinode-636315 ...
	I1011 21:34:05.259360  403983 cli_runner.go:164] Run: docker container inspect multinode-636315 --format={{.State.Status}}
	I1011 21:34:05.260252  403983 notify.go:220] Checking for updates...
	I1011 21:34:05.277126  403983 status.go:371] multinode-636315 host status = "Stopped" (err=<nil>)
	I1011 21:34:05.277150  403983 status.go:384] host is not running, skipping remaining checks
	I1011 21:34:05.277157  403983 status.go:176] multinode-636315 status: &{Name:multinode-636315 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:34:05.277190  403983 status.go:174] checking status of multinode-636315-m02 ...
	I1011 21:34:05.277508  403983 cli_runner.go:164] Run: docker container inspect multinode-636315-m02 --format={{.State.Status}}
	I1011 21:34:05.294337  403983 status.go:371] multinode-636315-m02 host status = "Stopped" (err=<nil>)
	I1011 21:34:05.294372  403983 status.go:384] host is not running, skipping remaining checks
	I1011 21:34:05.294379  403983 status.go:176] multinode-636315-m02 status: &{Name:multinode-636315-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)

TestMultiNode/serial/RestartMultiNode (48.05s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-636315 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-636315 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (47.404164855s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-636315 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.05s)

TestMultiNode/serial/ValidateNameConflict (34.33s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-636315
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-636315-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-636315-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.319204ms)

-- stdout --
	* [multinode-636315-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-636315-m02' is duplicated with machine name 'multinode-636315-m02' in profile 'multinode-636315'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-636315-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-636315-m03 --driver=docker  --container-runtime=crio: (31.933924102s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-636315
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-636315: exit status 80 (308.610482ms)

-- stdout --
	* Adding node m03 to cluster multinode-636315 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-636315-m03 already exists in multinode-636315-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-636315-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-636315-m03: (1.939077235s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.33s)

TestPreload (124.84s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-603177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1011 21:36:40.480584  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:36:41.936781  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-603177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m31.041055263s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-603177 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-603177 image pull gcr.io/k8s-minikube/busybox: (3.011465937s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-603177
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-603177: (5.838766287s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-603177 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-603177 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (22.141082898s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-603177 image list
helpers_test.go:175: Cleaning up "test-preload-603177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-603177
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-603177: (2.531106602s)
--- PASS: TestPreload (124.84s)

TestScheduledStopUnix (109.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-984176 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-984176 --memory=2048 --driver=docker  --container-runtime=crio: (33.58385965s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-984176 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-984176 -n scheduled-stop-984176
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-984176 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1011 21:38:10.671334  282920 retry.go:31] will retry after 102.263µs: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.672438  282920 retry.go:31] will retry after 88.059µs: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.673563  282920 retry.go:31] will retry after 173.592µs: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.674654  282920 retry.go:31] will retry after 475.453µs: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.676703  282920 retry.go:31] will retry after 672.023µs: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.677829  282920 retry.go:31] will retry after 914.835µs: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.678952  282920 retry.go:31] will retry after 1.272102ms: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.681106  282920 retry.go:31] will retry after 2.196667ms: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.684292  282920 retry.go:31] will retry after 2.210393ms: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.687463  282920 retry.go:31] will retry after 5.645007ms: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.693673  282920 retry.go:31] will retry after 7.281204ms: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.701904  282920 retry.go:31] will retry after 5.322635ms: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.708132  282920 retry.go:31] will retry after 6.657469ms: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.715362  282920 retry.go:31] will retry after 27.501045ms: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.743583  282920 retry.go:31] will retry after 28.447931ms: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
I1011 21:38:10.772827  282920 retry.go:31] will retry after 34.559651ms: open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/scheduled-stop-984176/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-984176 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-984176 -n scheduled-stop-984176
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-984176
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-984176 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-984176
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-984176: exit status 7 (78.559227ms)

-- stdout --
	scheduled-stop-984176
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-984176 -n scheduled-stop-984176
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-984176 -n scheduled-stop-984176: exit status 7 (70.046981ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-984176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-984176
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-984176: (4.806463367s)
--- PASS: TestScheduledStopUnix (109.98s)

TestInsufficientStorage (10.77s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-247956 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-247956 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.335646452s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"70da7033-01d5-44c7-ab88-a520bc5d6da2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-247956] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1472fb9c-7646-49f8-93d3-886f32762515","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19749"}}
	{"specversion":"1.0","id":"33bb62a1-402e-4cea-ac2a-83b51b4f5418","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9873d9a0-d713-492a-a563-93f83cbcedf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig"}}
	{"specversion":"1.0","id":"b5d25369-0218-4c0f-9895-1489eab993e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube"}}
	{"specversion":"1.0","id":"7b86c7d0-c724-4756-ba4b-a230fcaf5c5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5722e23c-a3ac-47e2-9a6f-0e673cafd7a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b7bee9a9-5729-4ba0-b434-44d50a7356b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"791acbc2-fd13-40cc-97fb-51a644ed0fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ccc43fff-432c-45d6-aec4-a45b58a58e4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f22c50b3-09b1-48e0-86f1-197bb36245c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c5bbb4e3-f5f6-4a36-ad7b-0e10edb9198f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-247956\" primary control-plane node in \"insufficient-storage-247956\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"84d21cf5-7368-46d5-9847-b0e8f7371449","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1728382586-19774 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bde2b63c-e30c-4031-8c64-bedd2cf4e942","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f3ee1c6-4824-4302-9d54-03d894cad4af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
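Each line of the `--output=json` stream above is a self-contained CloudEvents-style JSON object whose `type` field distinguishes `io.k8s.sigs.minikube.step`, `io.k8s.sigs.minikube.info`, and `io.k8s.sigs.minikube.error` events. As a rough illustration of how a consumer might process such a stream (the `summarize_events` helper below is hypothetical, not minikube tooling), one can split the output on newlines and dispatch on `type`:

```python
import json

def summarize_events(lines):
    """Scan minikube --output=json event lines; return (steps, errors).

    steps  -> list of step messages, in order
    errors -> list of (name, exitcode, message) tuples for error events
    """
    steps, errors = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)          # one JSON object per line
        data = event.get("data", {})
        etype = event.get("type", "")
        if etype == "io.k8s.sigs.minikube.step":
            steps.append(data.get("message", ""))
        elif etype == "io.k8s.sigs.minikube.error":
            errors.append((data.get("name"),
                           data.get("exitcode"),
                           data.get("message")))
    return steps, errors

# Trimmed-down events modeled on the log output above:
stream = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",'
    '"data":{"currentstep":"1","message":"Using the docker driver"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
    '"message":"Docker is out of disk space!"}}',
]
steps, errors = summarize_events(stream)
```

This is how the test harness can detect the expected exit-status-26 failure (`RSRC_DOCKER_STORAGE`) without scraping human-readable text.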
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-247956 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-247956 --output=json --layout=cluster: exit status 7 (282.46369ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-247956","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-247956","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 21:39:35.144054  421583 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-247956" does not appear in /home/jenkins/minikube-integration/19749-277533/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-247956 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-247956 --output=json --layout=cluster: exit status 7 (295.89397ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-247956","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-247956","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 21:39:35.440914  421643 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-247956" does not appear in /home/jenkins/minikube-integration/19749-277533/kubeconfig
	E1011 21:39:35.451174  421643 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/insufficient-storage-247956/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-247956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-247956
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-247956: (1.852610983s)
--- PASS: TestInsufficientStorage (10.77s)
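The `status --output=json --layout=cluster` payloads above reuse HTTP-like status codes for each component: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage. A minimal sketch of reading that schema (the `unhealthy` helper and the trimmed sample below are illustrative, not part of the test suite) could flag every non-200 component:

```python
import json

def unhealthy(status_json):
    """Return (component, StatusName) pairs for every component not at 200/OK."""
    st = json.loads(status_json)
    bad = []
    # Top-level components (e.g. kubeconfig) ...
    for name, comp in st.get("Components", {}).items():
        if comp["StatusCode"] != 200:
            bad.append((name, comp["StatusName"]))
    # ... plus per-node components (apiserver, kubelet).
    for node in st.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            if comp["StatusCode"] != 200:
                bad.append((name, comp["StatusName"]))
    return bad

# Trimmed version of the InsufficientStorage status shown above:
sample = '''{"Name":"insufficient-storage-247956","StatusCode":507,
"StatusName":"InsufficientStorage",
"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},
"Nodes":[{"Name":"insufficient-storage-247956","StatusCode":507,
"Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},
"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''
problems = unhealthy(sample)
```

Note that the binary's exit status (7 here) mirrors the severity of the worst component, which is why the test accepts a non-zero exit while still parsing valid JSON from stdout.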

                                                
                                    
TestRunningBinaryUpgrade (82.38s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1244755936 start -p running-upgrade-721521 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1011 21:44:43.553387  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1244755936 start -p running-upgrade-721521 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.311504914s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-721521 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-721521 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.405529878s)
helpers_test.go:175: Cleaning up "running-upgrade-721521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-721521
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-721521: (2.934871546s)
--- PASS: TestRunningBinaryUpgrade (82.38s)

                                                
                                    
TestKubernetesUpgrade (387.86s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642758 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1011 21:41:40.480730  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:41:41.937437  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-642758 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m16.854977469s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-642758
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-642758: (1.248589335s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-642758 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-642758 status --format={{.Host}}: exit status 7 (105.575018ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642758 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-642758 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m35.032429755s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-642758 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642758 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-642758 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (129.965843ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-642758] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-642758
	    minikube start -p kubernetes-upgrade-642758 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6427582 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-642758 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642758 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-642758 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.175019772s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-642758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-642758
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-642758: (2.179274581s)
--- PASS: TestKubernetesUpgrade (387.86s)
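The `K8S_DOWNGRADE_UNSUPPORTED` exit (status 106) above comes from comparing the cluster's existing Kubernetes version against the requested one. As a simplified stand-in for minikube's real semver handling (the function names here are invented for illustration), the check amounts to a tuple comparison:

```python
def parse_version(v):
    """'v1.31.1' -> (1, 31, 1). Simplified: assumes a plain vMAJOR.MINOR.PATCH string."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def downgrade_requested(current, requested):
    """True when the requested version is older than the running cluster's version."""
    return parse_version(requested) < parse_version(current)

# Mirrors the transitions exercised by TestKubernetesUpgrade above:
rejected = downgrade_requested("v1.31.1", "v1.20.0")   # downgrade -> refused
upgrade = downgrade_requested("v1.20.0", "v1.31.1")    # upgrade   -> allowed
restart = downgrade_requested("v1.31.1", "v1.31.1")    # same-version restart -> allowed
```

A downgrade is refused rather than attempted because etcd and API object schemas are not guaranteed to roll back, hence the suggestion in the stderr output to delete and recreate the cluster instead.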

                                                
                                    
TestMissingContainerUpgrade (156.98s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1918257558 start -p missing-upgrade-798016 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1918257558 start -p missing-upgrade-798016 --memory=2200 --driver=docker  --container-runtime=crio: (1m28.343892031s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-798016
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-798016: (10.422897885s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-798016
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-798016 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-798016 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.307638852s)
helpers_test.go:175: Cleaning up "missing-upgrade-798016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-798016
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-798016: (1.976700843s)
--- PASS: TestMissingContainerUpgrade (156.98s)

                                                
                                    
TestPause/serial/Start (60.74s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-132908 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-132908 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m0.736387459s)
--- PASS: TestPause/serial/Start (60.74s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-432944 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-432944 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (112.725535ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-432944] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-432944 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-432944 --driver=docker  --container-runtime=crio: (41.8843772s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-432944 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.34s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-432944 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-432944 --no-kubernetes --driver=docker  --container-runtime=crio: (5.476681294s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-432944 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-432944 status -o json: exit status 2 (297.992951ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-432944","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-432944
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-432944: (1.915346263s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.69s)

                                                
                                    
TestNoKubernetes/serial/Start (8.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-432944 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-432944 --no-kubernetes --driver=docker  --container-runtime=crio: (8.220346608s)
--- PASS: TestNoKubernetes/serial/Start (8.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-432944 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-432944 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.498267ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-432944
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-432944: (1.240744688s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (23.78s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-132908 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-132908 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.765459023s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (23.78s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-432944 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-432944 --driver=docker  --container-runtime=crio: (7.366637689s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.37s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-432944 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-432944 "sudo systemctl is-active --quiet service kubelet": exit status 1 (315.574853ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
TestPause/serial/Pause (1.09s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-132908 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-132908 --alsologtostderr -v=5: (1.094896765s)
--- PASS: TestPause/serial/Pause (1.09s)

                                                
                                    
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-132908 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-132908 --output=json --layout=cluster: exit status 2 (435.548485ms)

                                                
                                                
-- stdout --
	{"Name":"pause-132908","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-132908","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)

                                                
                                    
TestPause/serial/Unpause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-132908 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

                                                
                                    
TestPause/serial/PauseAgain (1.24s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-132908 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-132908 --alsologtostderr -v=5: (1.240675634s)
--- PASS: TestPause/serial/PauseAgain (1.24s)

                                                
                                    
TestPause/serial/DeletePaused (3.46s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-132908 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-132908 --alsologtostderr -v=5: (3.455642053s)
--- PASS: TestPause/serial/DeletePaused (3.46s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.14s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-132908
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-132908: exit status 1 (15.45817ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-132908: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.14s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.06s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (73.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1560035742 start -p stopped-upgrade-323478 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1560035742 start -p stopped-upgrade-323478 --memory=2200 --vm-driver=docker  --container-runtime=crio: (33.672341963s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1560035742 -p stopped-upgrade-323478 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1560035742 -p stopped-upgrade-323478 stop: (2.49845187s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-323478 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-323478 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.86255284s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.03s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-323478
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-323478: (1.068467986s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

TestNetworkPlugins/group/false (3.97s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-511191 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-511191 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (179.204429ms)

-- stdout --
	* [false-511191] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I1011 21:46:40.979738  459402 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:46:40.980180  459402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:46:40.980193  459402 out.go:358] Setting ErrFile to fd 2...
	I1011 21:46:40.980199  459402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:46:40.980454  459402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-277533/.minikube/bin
	I1011 21:46:40.980871  459402 out.go:352] Setting JSON to false
	I1011 21:46:40.981782  459402 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":12544,"bootTime":1728670657,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1011 21:46:40.981854  459402 start.go:139] virtualization:  
	I1011 21:46:40.985303  459402 out.go:177] * [false-511191] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 21:46:40.988597  459402 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:46:40.988664  459402 notify.go:220] Checking for updates...
	I1011 21:46:40.993302  459402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:46:40.995765  459402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-277533/kubeconfig
	I1011 21:46:40.998369  459402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-277533/.minikube
	I1011 21:46:41.000923  459402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 21:46:41.003349  459402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:46:41.006635  459402 config.go:182] Loaded profile config "kubernetes-upgrade-642758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:46:41.006816  459402 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:46:41.035918  459402 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 21:46:41.036058  459402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:46:41.093273  459402 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 21:46:41.083186857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:46:41.093396  459402 docker.go:318] overlay module found
	I1011 21:46:41.096066  459402 out.go:177] * Using the docker driver based on user configuration
	I1011 21:46:41.098522  459402 start.go:297] selected driver: docker
	I1011 21:46:41.098541  459402 start.go:901] validating driver "docker" against <nil>
	I1011 21:46:41.098557  459402 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:46:41.101662  459402 out.go:201] 
	W1011 21:46:41.104197  459402 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1011 21:46:41.106864  459402 out.go:201] 

** /stderr **
E1011 21:46:41.937393  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:88: 
----------------------- debugLogs start: false-511191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-511191

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-511191

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-511191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-511191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-511191

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-511191

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-511191

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-511191

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-511191

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-511191

>>> host: /etc/nsswitch.conf:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: /etc/hosts:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: /etc/resolv.conf:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-511191

>>> host: crictl pods:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: crictl containers:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> k8s: describe netcat deployment:
error: context "false-511191" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-511191" does not exist

>>> k8s: netcat logs:
error: context "false-511191" does not exist

>>> k8s: describe coredns deployment:
error: context "false-511191" does not exist

>>> k8s: describe coredns pods:
error: context "false-511191" does not exist

>>> k8s: coredns logs:
error: context "false-511191" does not exist

>>> k8s: describe api server pod(s):
error: context "false-511191" does not exist

>>> k8s: api server logs:
error: context "false-511191" does not exist

>>> host: /etc/cni:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: ip a s:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: ip r s:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: iptables-save:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: iptables table nat:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> k8s: describe kube-proxy daemon set:
error: context "false-511191" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-511191" does not exist

>>> k8s: kube-proxy logs:
error: context "false-511191" does not exist

>>> host: kubelet daemon status:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: kubelet daemon config:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> k8s: kubelet logs:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 11 Oct 2024 21:42:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-642758
contexts:
- context:
    cluster: kubernetes-upgrade-642758
    user: kubernetes-upgrade-642758
  name: kubernetes-upgrade-642758
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-642758
  user:
    client-certificate: /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kubernetes-upgrade-642758/client.crt
    client-key: /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kubernetes-upgrade-642758/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-511191

>>> host: docker daemon status:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: docker daemon config:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: /etc/docker/daemon.json:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: docker system info:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: cri-docker daemon status:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: cri-docker daemon config:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: cri-dockerd version:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: containerd daemon status:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: containerd daemon config:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: /etc/containerd/config.toml:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: containerd config dump:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: crio daemon status:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: crio daemon config:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: /etc/crio:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

>>> host: crio config:
* Profile "false-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511191"

----------------------- debugLogs end: false-511191 [took: 3.640384307s] --------------------------------
helpers_test.go:175: Cleaning up "false-511191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-511191
--- PASS: TestNetworkPlugins/group/false (3.97s)

TestStartStop/group/old-k8s-version/serial/FirstStart (155.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-357971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1011 21:49:45.011187  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-357971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m35.342651864s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (155.34s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-357971 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9abcee9a-8388-4aa2-85fa-5460d4d4fe66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9abcee9a-8388-4aa2-85fa-5460d4d4fe66] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004175091s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-357971 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-357971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-357971 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-357971 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-357971 --alsologtostderr -v=3: (12.007879644s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357971 -n old-k8s-version-357971
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357971 -n old-k8s-version-357971: exit status 7 (73.773606ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-357971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (37.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-357971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1011 21:51:40.481193  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:51:41.937813  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-357971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (36.762594959s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357971 -n old-k8s-version-357971
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (37.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (66.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-431868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-431868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m6.842125261s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (29.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5x8kq" [14e0e4bb-811e-49e7-89ff-dff1d01483c4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5x8kq" [14e0e4bb-811e-49e7-89ff-dff1d01483c4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 29.004254207s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (29.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5x8kq" [14e0e4bb-811e-49e7-89ff-dff1d01483c4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004009257s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-357971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-357971 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-357971 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-357971 --alsologtostderr -v=1: (1.177545151s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-357971 -n old-k8s-version-357971
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-357971 -n old-k8s-version-357971: exit status 2 (362.387666ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-357971 -n old-k8s-version-357971
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-357971 -n old-k8s-version-357971: exit status 2 (392.079875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-357971 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-357971 -n old-k8s-version-357971
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-357971 -n old-k8s-version-357971
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.61s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (52.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-309024 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-309024 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (52.758592843s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-431868 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b07059aa-397b-4432-a261-719c33c2ede7] Pending
helpers_test.go:344: "busybox" [b07059aa-397b-4432-a261-719c33c2ede7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b07059aa-397b-4432-a261-719c33c2ede7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003148781s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-431868 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-431868 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-431868 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.135348112s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-431868 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-431868 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-431868 --alsologtostderr -v=3: (12.120622945s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-431868 -n no-preload-431868
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-431868 -n no-preload-431868: exit status 7 (72.958133ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-431868 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (305.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-431868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-431868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (5m4.874705337s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-431868 -n no-preload-431868
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (305.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-309024 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3ef5607a-87ed-41ce-a3ec-0b1f4b2af75b] Pending
helpers_test.go:344: "busybox" [3ef5607a-87ed-41ce-a3ec-0b1f4b2af75b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3ef5607a-87ed-41ce-a3ec-0b1f4b2af75b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004593847s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-309024 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-309024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-309024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.464362096s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-309024 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-309024 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-309024 --alsologtostderr -v=3: (12.523889689s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-309024 -n embed-certs-309024
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-309024 -n embed-certs-309024: exit status 7 (83.652063ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-309024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (281.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-309024 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1011 21:55:52.483522  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:55:52.489912  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:55:52.501402  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:55:52.522804  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:55:52.564248  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:55:52.645713  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:55:52.807126  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:55:53.128867  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:55:53.771216  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:55:55.054185  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:55:57.616243  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:56:02.738116  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:56:12.980375  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:56:33.462216  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:56:40.481117  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:56:41.937302  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:14.424084  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-309024 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m40.792183327s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-309024 -n embed-certs-309024
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (281.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bz8h8" [6c6a091a-8b47-4fe0-91f8-ae1733eb8fb0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004523572s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bz8h8" [6c6a091a-8b47-4fe0-91f8-ae1733eb8fb0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003709088s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-431868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-431868 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bn6tp" [731b3202-110c-4636-87a6-5953d45d50e5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005737365s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-431868 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-431868 -n no-preload-431868
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-431868 -n no-preload-431868: exit status 2 (318.600773ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-431868 -n no-preload-431868
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-431868 -n no-preload-431868: exit status 2 (333.45354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-431868 --alsologtostderr -v=1
E1011 21:58:36.346460  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-431868 -n no-preload-431868
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-431868 -n no-preload-431868
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bn6tp" [731b3202-110c-4636-87a6-5953d45d50e5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003594678s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-309024 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-794578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-794578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m0.392940456s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.39s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-309024 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (4.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-309024 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-309024 --alsologtostderr -v=1: (1.094049057s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-309024 -n embed-certs-309024
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-309024 -n embed-certs-309024: exit status 2 (596.788683ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-309024 -n embed-certs-309024
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-309024 -n embed-certs-309024: exit status 2 (595.017493ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-309024 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-309024 -n embed-certs-309024
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-309024 -n embed-certs-309024
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.17s)

TestStartStop/group/newest-cni/serial/FirstStart (43.13s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-014771 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-014771 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (43.126104574s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-014771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-014771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.398991436s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/newest-cni/serial/Stop (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-014771 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-014771 --alsologtostderr -v=3: (1.315859143s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-014771 -n newest-cni-014771
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-014771 -n newest-cni-014771: exit status 7 (121.149067ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-014771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/SecondStart (18.67s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-014771 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-014771 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (18.110684867s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-014771 -n newest-cni-014771
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.67s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-794578 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [63da4325-4160-4a40-83c3-e55a02593cbe] Pending
helpers_test.go:344: "busybox" [63da4325-4160-4a40-83c3-e55a02593cbe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [63da4325-4160-4a40-83c3-e55a02593cbe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003594707s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-794578 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-794578 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-794578 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.489839183s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-794578 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.66s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (14.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-794578 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-794578 --alsologtostderr -v=3: (14.531076943s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.53s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-014771 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-014771 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-014771 -n newest-cni-014771
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-014771 -n newest-cni-014771: exit status 2 (308.935262ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-014771 -n newest-cni-014771
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-014771 -n newest-cni-014771: exit status 2 (309.90238ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-014771 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-014771 -n newest-cni-014771
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-014771 -n newest-cni-014771
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)

TestNetworkPlugins/group/auto/Start (56.67s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (56.670998078s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794578 -n default-k8s-diff-port-794578
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794578 -n default-k8s-diff-port-794578: exit status 7 (101.064633ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-794578 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (274.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-794578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1011 22:00:52.483507  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-794578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m34.180899485s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794578 -n default-k8s-diff-port-794578
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (274.62s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-511191 "pgrep -a kubelet"
I1011 22:01:00.547733  282920 config.go:182] Loaded profile config "auto-511191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (13.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-511191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gqszm" [66f7e05a-d7d6-406a-96a9-c09fab7f2ac5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gqszm" [66f7e05a-d7d6-406a-96a9-c09fab7f2ac5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004306395s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.28s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-511191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/Start (50.32s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1011 22:01:40.480885  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:01:41.937410  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (50.318089153s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wvwcs" [80c04bb3-8a36-44f5-b6cf-3d058261db06] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004827163s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-511191 "pgrep -a kubelet"
I1011 22:02:31.188792  282920 config.go:182] Loaded profile config "kindnet-511191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-511191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-btct2" [fbc1005b-a868-43c3-9303-c20ce638331b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-btct2" [fbc1005b-a868-43c3-9303-c20ce638331b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004523436s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-511191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/calico/Start (60.85s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1011 22:03:03.184728  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/no-preload-431868/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:03:13.426109  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/no-preload-431868/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:03:33.908066  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/no-preload-431868/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.847057952s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.85s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-sdjrn" [9c970810-b8a5-47c3-ac48-263d606d9a4b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00440325s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-511191 "pgrep -a kubelet"
I1011 22:04:09.054829  282920 config.go:182] Loaded profile config "calico-511191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-511191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h9zpm" [fd2fbe01-bd8c-4cc1-bd99-648440cb7d07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h9zpm" [fd2fbe01-bd8c-4cc1-bd99-648440cb7d07] Running
E1011 22:04:14.870186  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/no-preload-431868/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003950029s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.25s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-511191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (62.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.50423283s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.50s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qwlkr" [1631737a-8d95-47d3-946c-c3c0da8111eb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021542592s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qwlkr" [1631737a-8d95-47d3-946c-c3c0da8111eb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003740887s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-794578 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-794578 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-794578 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-794578 -n default-k8s-diff-port-794578
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-794578 -n default-k8s-diff-port-794578: exit status 2 (378.511532ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-794578 -n default-k8s-diff-port-794578
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-794578 -n default-k8s-diff-port-794578: exit status 2 (377.84271ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-794578 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-794578 -n default-k8s-diff-port-794578
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-794578 -n default-k8s-diff-port-794578
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.70s)
E1011 22:07:52.929345  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/no-preload-431868/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/enable-default-cni/Start (48.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1011 22:05:36.792044  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/no-preload-431868/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (48.159081799s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (48.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-511191 "pgrep -a kubelet"
I1011 22:05:44.759099  282920 config.go:182] Loaded profile config "custom-flannel-511191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-511191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6rqbt" [d7e8b275-4a74-4d88-a33b-7998567ead8a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6rqbt" [d7e8b275-4a74-4d88-a33b-7998567ead8a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004629894s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-511191 "pgrep -a kubelet"
I1011 22:05:50.715945  282920 config.go:182] Loaded profile config "enable-default-cni-511191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-511191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kpm6w" [b63ddd79-98a3-49ec-a9d0-46905f01fea0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 22:05:52.483610  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/old-k8s-version-357971/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-kpm6w" [b63ddd79-98a3-49ec-a9d0-46905f01fea0] Running
E1011 22:06:00.803599  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:00.809983  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:00.821470  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:00.842966  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:00.884383  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:00.966392  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003960589s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-511191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-511191 exec deployment/netcat -- nslookup kubernetes.default
E1011 22:06:01.130507  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1011 22:06:01.452390  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

TestNetworkPlugins/group/flannel/Start (53.31s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1011 22:06:21.301999  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.307903508s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.31s)

TestNetworkPlugins/group/bridge/Start (78.04s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1011 22:06:40.481024  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:41.783998  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:41.936850  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/functional-824457/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-511191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m18.044547219s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.04s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cp8lz" [ed0cc918-4088-4c15-ab1f-ff269b8910aa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004799702s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-511191 "pgrep -a kubelet"
I1011 22:07:18.710645  282920 config.go:182] Loaded profile config "flannel-511191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-511191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gbp4z" [cdfa042f-a253-46cc-9a73-ea52a7e4e892] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 22:07:22.745418  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/auto-511191/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-gbp4z" [cdfa042f-a253-46cc-9a73-ea52a7e4e892] Running
E1011 22:07:24.891035  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:24.897432  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:24.908773  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:24.930175  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:24.971534  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:25.053687  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:25.215265  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:25.536929  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:26.179146  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:27.460585  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003465295s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-511191 exec deployment/netcat -- nslookup kubernetes.default
E1011 22:07:30.021986  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-511191 "pgrep -a kubelet"
I1011 22:07:44.763950  282920 config.go:182] Loaded profile config "bridge-511191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-511191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9plfv" [ca7ad396-e662-4008-afa3-172da2dd9fd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 22:07:45.386159  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kindnet-511191/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-9plfv" [ca7ad396-e662-4008-afa3-172da2dd9fd5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003787857s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-511191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-511191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)


Test skip (30/329)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-358295 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-358295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-358295
--- SKIP: TestDownloadOnlyKic (0.53s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.31s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-627736 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:968: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-333600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-333600
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.84s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E1011 21:46:40.480778  282920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/addons-627736/client.crt: no such file or directory" logger="UnhandledError"
panic.go:629: 
----------------------- debugLogs start: kubenet-511191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-511191

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-511191

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-511191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-511191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-511191

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-511191

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-511191

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-511191

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-511191

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-511191

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: /etc/hosts:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: /etc/resolv.conf:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-511191

>>> host: crictl pods:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: crictl containers:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> k8s: describe netcat deployment:
error: context "kubenet-511191" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-511191" does not exist

>>> k8s: netcat logs:
error: context "kubenet-511191" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-511191" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-511191" does not exist

>>> k8s: coredns logs:
error: context "kubenet-511191" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-511191" does not exist

>>> k8s: api server logs:
error: context "kubenet-511191" does not exist

>>> host: /etc/cni:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: ip a s:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: ip r s:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: iptables-save:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: iptables table nat:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-511191" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-511191" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-511191" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: kubelet daemon config:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> k8s: kubelet logs:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 11 Oct 2024 21:42:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-642758
contexts:
- context:
    cluster: kubernetes-upgrade-642758
    user: kubernetes-upgrade-642758
  name: kubernetes-upgrade-642758
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-642758
  user:
    client-certificate: /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kubernetes-upgrade-642758/client.crt
    client-key: /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kubernetes-upgrade-642758/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-511191

>>> host: docker daemon status:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: docker daemon config:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: docker system info:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: cri-docker daemon status:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: cri-docker daemon config:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: cri-dockerd version:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: containerd daemon status:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: containerd daemon config:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: containerd config dump:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: crio daemon status:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: crio daemon config:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: /etc/crio:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

>>> host: crio config:
* Profile "kubenet-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511191"

----------------------- debugLogs end: kubenet-511191 [took: 3.683751087s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-511191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-511191
--- SKIP: TestNetworkPlugins/group/kubenet (3.84s)

x
+
TestNetworkPlugins/group/cilium (4.7s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-511191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-511191

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-511191

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-511191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-511191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-511191

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-511191

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-511191

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-511191

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-511191

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-511191

>>> host: /etc/nsswitch.conf:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: /etc/hosts:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: /etc/resolv.conf:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-511191

>>> host: crictl pods:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: crictl containers:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> k8s: describe netcat deployment:
error: context "cilium-511191" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-511191" does not exist

>>> k8s: netcat logs:
error: context "cilium-511191" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-511191" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-511191" does not exist

>>> k8s: coredns logs:
error: context "cilium-511191" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-511191" does not exist

>>> k8s: api server logs:
error: context "cilium-511191" does not exist

>>> host: /etc/cni:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: ip a s:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: ip r s:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: iptables-save:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: iptables table nat:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-511191

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-511191

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-511191" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-511191" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-511191

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-511191

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-511191" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-511191" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-511191" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-511191" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-511191" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: kubelet daemon config:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> k8s: kubelet logs:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19749-277533/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 11 Oct 2024 21:42:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-642758
contexts:
- context:
    cluster: kubernetes-upgrade-642758
    user: kubernetes-upgrade-642758
  name: kubernetes-upgrade-642758
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-642758
  user:
    client-certificate: /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kubernetes-upgrade-642758/client.crt
    client-key: /home/jenkins/minikube-integration/19749-277533/.minikube/profiles/kubernetes-upgrade-642758/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-511191

>>> host: docker daemon status:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: docker daemon config:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: docker system info:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: cri-docker daemon status:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: cri-docker daemon config:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: cri-dockerd version:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: containerd daemon status:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: containerd daemon config:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: containerd config dump:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: crio daemon status:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: crio daemon config:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: /etc/crio:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

>>> host: crio config:
* Profile "cilium-511191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511191"

----------------------- debugLogs end: cilium-511191 [took: 4.515055084s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-511191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-511191
--- SKIP: TestNetworkPlugins/group/cilium (4.70s)