Test Report: Docker_Linux_crio_arm64 20354

f4981b37cef8a8edf9576fbca56a900d4b787caa:2025-02-03:38193

Tests failed (1/331)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress  | 152.64       |
TestAddons/parallel/Ingress (152.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-595492 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-595492 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-595492 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7dfea6a0-d963-450f-94ca-953599151a62] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7dfea6a0-d963-450f-94ca-953599151a62] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003693082s
I0203 11:17:31.656620  298903 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-595492 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.778587753s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
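The `ssh: Process exited with status 28` above is curl's exit code propagated through ssh: curl uses 28 for `CURLE_OPERATION_TIMEDOUT`, meaning the request to the ingress never completed within curl's time budget, rather than a connection refusal (7) or a DNS failure (6). A minimal sketch of that exit-code mapping (a hypothetical helper, not part of the minikube test suite):

```shell
#!/bin/sh
# Hypothetical helper, not part of the minikube test suite: translate the
# curl exit status that ssh propagated (28 in the failure above) into a
# readable cause, using curl's documented exit codes.
explain_curl_exit() {
  case "$1" in
    0)  echo "curl: success" ;;
    6)  echo "curl: could not resolve host" ;;
    7)  echo "curl: failed to connect" ;;
    28) echo "curl: operation timed out (CURLE_OPERATION_TIMEDOUT)" ;;
    *)  echo "curl: exit $1" ;;
  esac
}

explain_curl_exit 28
```

A timeout here typically points at the ingress controller accepting no traffic on port 80 inside the node, rather than the service or DNS being misconfigured.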
addons_test.go:286: (dbg) Run:  kubectl --context addons-595492 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-595492
helpers_test.go:235: (dbg) docker inspect addons-595492:

-- stdout --
	[
	    {
	        "Id": "48839ec78b4fd466fb5df5b16785a4e877a865004bb3b243b98813c772e6b3dd",
	        "Created": "2025-02-03T11:13:39.359007978Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300165,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-03T11:13:39.510199942Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/48839ec78b4fd466fb5df5b16785a4e877a865004bb3b243b98813c772e6b3dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48839ec78b4fd466fb5df5b16785a4e877a865004bb3b243b98813c772e6b3dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/48839ec78b4fd466fb5df5b16785a4e877a865004bb3b243b98813c772e6b3dd/hosts",
	        "LogPath": "/var/lib/docker/containers/48839ec78b4fd466fb5df5b16785a4e877a865004bb3b243b98813c772e6b3dd/48839ec78b4fd466fb5df5b16785a4e877a865004bb3b243b98813c772e6b3dd-json.log",
	        "Name": "/addons-595492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-595492:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-595492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/980f008272b7f38b3385d966cf1ca8912793d1fdfa86c52aeb32c15fd5ef9891-init/diff:/var/lib/docker/overlay2/8599f4284c639846bc4bd94dabcc376107acf6324c6aa204b88d816eb746cc28/diff",
	                "MergedDir": "/var/lib/docker/overlay2/980f008272b7f38b3385d966cf1ca8912793d1fdfa86c52aeb32c15fd5ef9891/merged",
	                "UpperDir": "/var/lib/docker/overlay2/980f008272b7f38b3385d966cf1ca8912793d1fdfa86c52aeb32c15fd5ef9891/diff",
	                "WorkDir": "/var/lib/docker/overlay2/980f008272b7f38b3385d966cf1ca8912793d1fdfa86c52aeb32c15fd5ef9891/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-595492",
	                "Source": "/var/lib/docker/volumes/addons-595492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-595492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-595492",
	                "name.minikube.sigs.k8s.io": "addons-595492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad1dabedaa7c17cfec14ad2df01bb35620b17b3ca7ac50268bd05f810c59509c",
	            "SandboxKey": "/var/run/docker/netns/ad1dabedaa7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-595492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "1417ada8252146e2f543c0cef2718565f0515da067b7c98ed9504a288ba9fbf3",
	                    "EndpointID": "4e1e7681d33945e615663a2cbf0faea1fcea1b91327de9eee017c16ab3adf9c0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-595492",
	                        "48839ec78b4f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-595492 -n addons-595492
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-595492 logs -n 25: (1.67366086s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-606227                                                                     | download-only-606227   | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	| start   | --download-only -p                                                                          | download-docker-762626 | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC |                     |
	|         | download-docker-762626                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-762626                                                                   | download-docker-762626 | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-482869   | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC |                     |
	|         | binary-mirror-482869                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36113                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-482869                                                                     | binary-mirror-482869   | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	| addons  | enable dashboard -p                                                                         | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC |                     |
	|         | addons-595492                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC |                     |
	|         | addons-595492                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-595492 --wait=true                                                                | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:16 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-595492 addons disable                                                                | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:16 UTC | 03 Feb 25 11:16 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-595492 addons disable                                                                | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:16 UTC | 03 Feb 25 11:16 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:16 UTC | 03 Feb 25 11:16 UTC |
	|         | -p addons-595492                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-595492 addons disable                                                                | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:16 UTC | 03 Feb 25 11:17 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-595492 ip                                                                            | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:17 UTC | 03 Feb 25 11:17 UTC |
	| addons  | addons-595492 addons disable                                                                | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:17 UTC | 03 Feb 25 11:17 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-595492 addons                                                                        | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:17 UTC | 03 Feb 25 11:17 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-595492 addons                                                                        | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:17 UTC | 03 Feb 25 11:17 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-595492 ssh curl -s                                                                   | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:17 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-595492 addons                                                                        | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:17 UTC | 03 Feb 25 11:17 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-595492 addons                                                                        | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:17 UTC | 03 Feb 25 11:17 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-595492 ssh cat                                                                       | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:18 UTC | 03 Feb 25 11:18 UTC |
	|         | /opt/local-path-provisioner/pvc-0ef87c84-f946-4eaa-bcc6-293143cf15da_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-595492 addons disable                                                                | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:18 UTC | 03 Feb 25 11:18 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-595492 addons disable                                                                | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:18 UTC | 03 Feb 25 11:18 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-595492 addons                                                                        | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:19 UTC | 03 Feb 25 11:19 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-595492 addons                                                                        | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:19 UTC | 03 Feb 25 11:19 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-595492 ip                                                                            | addons-595492          | jenkins | v1.35.0 | 03 Feb 25 11:19 UTC | 03 Feb 25 11:19 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:13:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:13:14.178202  299665 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:13:14.178374  299665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:13:14.178383  299665 out.go:358] Setting ErrFile to fd 2...
	I0203 11:13:14.178389  299665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:13:14.178721  299665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
	I0203 11:13:14.179332  299665 out.go:352] Setting JSON to false
	I0203 11:13:14.180388  299665 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6924,"bootTime":1738574271,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0203 11:13:14.180466  299665 start.go:139] virtualization:  
	I0203 11:13:14.184255  299665 out.go:177] * [addons-595492] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0203 11:13:14.187338  299665 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:13:14.187506  299665 notify.go:220] Checking for updates...
	I0203 11:13:14.193217  299665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:13:14.196310  299665 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	I0203 11:13:14.199315  299665 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	I0203 11:13:14.202247  299665 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0203 11:13:14.205277  299665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:13:14.208417  299665 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:13:14.233562  299665 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0203 11:13:14.233681  299665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:13:14.302564  299665 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2025-02-03 11:13:14.293032777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:13:14.302677  299665 docker.go:318] overlay module found
	I0203 11:13:14.305769  299665 out.go:177] * Using the docker driver based on user configuration
	I0203 11:13:14.308724  299665 start.go:297] selected driver: docker
	I0203 11:13:14.308750  299665 start.go:901] validating driver "docker" against <nil>
	I0203 11:13:14.308767  299665 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:13:14.309499  299665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:13:14.362886  299665 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2025-02-03 11:13:14.353769419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:13:14.363110  299665 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 11:13:14.363338  299665 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:13:14.366204  299665 out.go:177] * Using Docker driver with root privileges
	I0203 11:13:14.369042  299665 cni.go:84] Creating CNI manager for ""
	I0203 11:13:14.369115  299665 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0203 11:13:14.369140  299665 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0203 11:13:14.369229  299665 start.go:340] cluster config:
	{Name:addons-595492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-595492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPau
seInterval:1m0s}
	I0203 11:13:14.374184  299665 out.go:177] * Starting "addons-595492" primary control-plane node in "addons-595492" cluster
	I0203 11:13:14.376900  299665 cache.go:121] Beginning downloading kic base image for docker with crio
	I0203 11:13:14.379858  299665 out.go:177] * Pulling base image v0.0.46 ...
	I0203 11:13:14.382662  299665 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:13:14.382717  299665 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-293520/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0203 11:13:14.382729  299665 cache.go:56] Caching tarball of preloaded images
	I0203 11:13:14.382834  299665 preload.go:172] Found /home/jenkins/minikube-integration/20354-293520/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0203 11:13:14.382849  299665 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0203 11:13:14.383213  299665 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/config.json ...
	I0203 11:13:14.383240  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/config.json: {Name:mkc31103dc712603dad40fda84d491ee10140a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:14.383347  299665 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0203 11:13:14.399165  299665 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0203 11:13:14.399288  299665 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0203 11:13:14.399313  299665 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0203 11:13:14.399322  299665 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0203 11:13:14.399337  299665 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0203 11:13:14.399343  299665 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
	I0203 11:13:31.524479  299665 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from cached tarball
	I0203 11:13:31.524521  299665 cache.go:230] Successfully downloaded all kic artifacts
	I0203 11:13:31.524591  299665 start.go:360] acquireMachinesLock for addons-595492: {Name:mk9ef34c15e66dbc9e4cde0ad56084adf156a44e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:13:31.524737  299665 start.go:364] duration metric: took 121.485µs to acquireMachinesLock for "addons-595492"
	I0203 11:13:31.524771  299665 start.go:93] Provisioning new machine with config: &{Name:addons-595492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-595492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:13:31.524856  299665 start.go:125] createHost starting for "" (driver="docker")
	I0203 11:13:31.528338  299665 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0203 11:13:31.528629  299665 start.go:159] libmachine.API.Create for "addons-595492" (driver="docker")
	I0203 11:13:31.528670  299665 client.go:168] LocalClient.Create starting
	I0203 11:13:31.528795  299665 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20354-293520/.minikube/certs/ca.pem
	I0203 11:13:31.828216  299665 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20354-293520/.minikube/certs/cert.pem
	I0203 11:13:32.802838  299665 cli_runner.go:164] Run: docker network inspect addons-595492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0203 11:13:32.818083  299665 cli_runner.go:211] docker network inspect addons-595492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0203 11:13:32.818172  299665 network_create.go:284] running [docker network inspect addons-595492] to gather additional debugging logs...
	I0203 11:13:32.818196  299665 cli_runner.go:164] Run: docker network inspect addons-595492
	W0203 11:13:32.833889  299665 cli_runner.go:211] docker network inspect addons-595492 returned with exit code 1
	I0203 11:13:32.833924  299665 network_create.go:287] error running [docker network inspect addons-595492]: docker network inspect addons-595492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-595492 not found
	I0203 11:13:32.833939  299665 network_create.go:289] output of [docker network inspect addons-595492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-595492 not found
	
	** /stderr **
	I0203 11:13:32.834041  299665 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 11:13:32.850444  299665 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b13e90}
	I0203 11:13:32.850491  299665 network_create.go:124] attempt to create docker network addons-595492 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0203 11:13:32.850553  299665 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-595492 addons-595492
	I0203 11:13:32.921006  299665 network_create.go:108] docker network addons-595492 192.168.49.0/24 created
	I0203 11:13:32.921041  299665 kic.go:121] calculated static IP "192.168.49.2" for the "addons-595492" container
	I0203 11:13:32.921118  299665 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0203 11:13:32.937417  299665 cli_runner.go:164] Run: docker volume create addons-595492 --label name.minikube.sigs.k8s.io=addons-595492 --label created_by.minikube.sigs.k8s.io=true
	I0203 11:13:32.955181  299665 oci.go:103] Successfully created a docker volume addons-595492
	I0203 11:13:32.955281  299665 cli_runner.go:164] Run: docker run --rm --name addons-595492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-595492 --entrypoint /usr/bin/test -v addons-595492:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0203 11:13:35.048260  299665 cli_runner.go:217] Completed: docker run --rm --name addons-595492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-595492 --entrypoint /usr/bin/test -v addons-595492:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (2.092935044s)
	I0203 11:13:35.048291  299665 oci.go:107] Successfully prepared a docker volume addons-595492
	I0203 11:13:35.048315  299665 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:13:35.048346  299665 kic.go:194] Starting extracting preloaded images to volume ...
	I0203 11:13:35.048420  299665 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20354-293520/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-595492:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0203 11:13:39.285926  299665 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20354-293520/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-595492:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.23746034s)
	I0203 11:13:39.285960  299665 kic.go:203] duration metric: took 4.237621316s to extract preloaded images to volume ...
	W0203 11:13:39.286113  299665 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0203 11:13:39.286242  299665 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0203 11:13:39.344755  299665 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-595492 --name addons-595492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-595492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-595492 --network addons-595492 --ip 192.168.49.2 --volume addons-595492:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0203 11:13:39.687581  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Running}}
	I0203 11:13:39.711868  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:13:39.736495  299665 cli_runner.go:164] Run: docker exec addons-595492 stat /var/lib/dpkg/alternatives/iptables
	I0203 11:13:39.798746  299665 oci.go:144] the created container "addons-595492" has a running status.
	I0203 11:13:39.798778  299665 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa...
	I0203 11:13:40.305123  299665 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0203 11:13:40.329753  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:13:40.352928  299665 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0203 11:13:40.352948  299665 kic_runner.go:114] Args: [docker exec --privileged addons-595492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0203 11:13:40.432108  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:13:40.454726  299665 machine.go:93] provisionDockerMachine start ...
	I0203 11:13:40.454983  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:13:40.474857  299665 main.go:141] libmachine: Using SSH client type: native
	I0203 11:13:40.475128  299665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0203 11:13:40.475138  299665 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:13:40.623945  299665 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-595492
	
	I0203 11:13:40.624028  299665 ubuntu.go:169] provisioning hostname "addons-595492"
	I0203 11:13:40.624126  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:13:40.647974  299665 main.go:141] libmachine: Using SSH client type: native
	I0203 11:13:40.648230  299665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0203 11:13:40.648242  299665 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-595492 && echo "addons-595492" | sudo tee /etc/hostname
	I0203 11:13:40.797899  299665 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-595492
	
	I0203 11:13:40.798058  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:13:40.820281  299665 main.go:141] libmachine: Using SSH client type: native
	I0203 11:13:40.820535  299665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0203 11:13:40.820557  299665 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-595492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-595492/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-595492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:13:40.940587  299665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:13:40.940654  299665 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20354-293520/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-293520/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-293520/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-293520/.minikube}
	I0203 11:13:40.940695  299665 ubuntu.go:177] setting up certificates
	I0203 11:13:40.940708  299665 provision.go:84] configureAuth start
	I0203 11:13:40.940783  299665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-595492
	I0203 11:13:40.958156  299665 provision.go:143] copyHostCerts
	I0203 11:13:40.958249  299665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-293520/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-293520/.minikube/ca.pem (1082 bytes)
	I0203 11:13:40.958375  299665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-293520/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-293520/.minikube/cert.pem (1123 bytes)
	I0203 11:13:40.958439  299665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-293520/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-293520/.minikube/key.pem (1679 bytes)
	I0203 11:13:40.958493  299665 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-293520/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-293520/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-293520/.minikube/certs/ca-key.pem org=jenkins.addons-595492 san=[127.0.0.1 192.168.49.2 addons-595492 localhost minikube]
	I0203 11:13:41.286296  299665 provision.go:177] copyRemoteCerts
	I0203 11:13:41.286367  299665 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:13:41.286408  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:13:41.304166  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:13:41.397305  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 11:13:41.421174  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0203 11:13:41.445005  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 11:13:41.468522  299665 provision.go:87] duration metric: took 527.786588ms to configureAuth
	I0203 11:13:41.468546  299665 ubuntu.go:193] setting minikube options for container-runtime
	I0203 11:13:41.468755  299665 config.go:182] Loaded profile config "addons-595492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:13:41.468865  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:13:41.485267  299665 main.go:141] libmachine: Using SSH client type: native
	I0203 11:13:41.485513  299665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0203 11:13:41.485536  299665 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 11:13:41.707218  299665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0203 11:13:41.707240  299665 machine.go:96] duration metric: took 1.252492942s to provisionDockerMachine
	I0203 11:13:41.707251  299665 client.go:171] duration metric: took 10.178572125s to LocalClient.Create
	I0203 11:13:41.707264  299665 start.go:167] duration metric: took 10.178638276s to libmachine.API.Create "addons-595492"
	I0203 11:13:41.707272  299665 start.go:293] postStartSetup for "addons-595492" (driver="docker")
	I0203 11:13:41.707282  299665 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:13:41.707347  299665 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:13:41.707388  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:13:41.725103  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:13:41.813364  299665 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:13:41.816341  299665 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 11:13:41.816378  299665 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 11:13:41.816392  299665 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 11:13:41.816399  299665 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0203 11:13:41.816415  299665 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-293520/.minikube/addons for local assets ...
	I0203 11:13:41.816485  299665 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-293520/.minikube/files for local assets ...
	I0203 11:13:41.816511  299665 start.go:296] duration metric: took 109.233672ms for postStartSetup
	I0203 11:13:41.816845  299665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-595492
	I0203 11:13:41.833039  299665 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/config.json ...
	I0203 11:13:41.833314  299665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:13:41.833367  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:13:41.849681  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:13:41.937140  299665 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 11:13:41.941431  299665 start.go:128] duration metric: took 10.416557661s to createHost
	I0203 11:13:41.941459  299665 start.go:83] releasing machines lock for "addons-595492", held for 10.416706895s
	I0203 11:13:41.941529  299665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-595492
	I0203 11:13:41.960639  299665 ssh_runner.go:195] Run: cat /version.json
	I0203 11:13:41.960701  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:13:41.960948  299665 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 11:13:41.961002  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:13:41.978651  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:13:41.988683  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:13:42.068333  299665 ssh_runner.go:195] Run: systemctl --version
	I0203 11:13:42.204376  299665 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 11:13:42.353027  299665 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 11:13:42.357605  299665 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:13:42.381169  299665 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0203 11:13:42.381296  299665 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:13:42.415286  299665 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0203 11:13:42.415326  299665 start.go:495] detecting cgroup driver to use...
	I0203 11:13:42.415361  299665 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0203 11:13:42.415429  299665 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:13:42.431935  299665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:13:42.444288  299665 docker.go:217] disabling cri-docker service (if available) ...
	I0203 11:13:42.444390  299665 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 11:13:42.460281  299665 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 11:13:42.475772  299665 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 11:13:42.562806  299665 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 11:13:42.659936  299665 docker.go:233] disabling docker service ...
	I0203 11:13:42.660057  299665 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 11:13:42.680763  299665 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 11:13:42.693116  299665 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 11:13:42.781772  299665 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 11:13:42.894841  299665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 11:13:42.907780  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:13:42.924518  299665 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0203 11:13:42.924605  299665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:13:42.934777  299665 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 11:13:42.934853  299665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:13:42.944949  299665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:13:42.954385  299665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:13:42.964138  299665 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:13:42.973339  299665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:13:42.982738  299665 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:13:42.998557  299665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:13:43.013067  299665 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:13:43.022011  299665 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 11:13:43.030782  299665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:13:43.118090  299665 ssh_runner.go:195] Run: sudo systemctl restart crio
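The run of `sed -i` commands above edits `/etc/crio/crio.conf.d/02-crio.conf` in place before the restart: pin the pause image, force the `cgroupfs` cgroup manager, and replace any `conmon_cgroup` line with `conmon_cgroup = "pod"`. The same edit sequence can be sketched against a scratch copy (the file contents below are illustrative, not the real drop-in):

```shell
# Scratch stand-in for /etc/crio/crio.conf.d/02-crio.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same edits the log performs, in the same order (GNU sed).
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
rm -f "$conf"
```

Deleting `conmon_cgroup` before re-appending it after `cgroup_manager` makes the sequence idempotent: re-running it leaves exactly one `conmon_cgroup = "pod"` line.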
	I0203 11:13:43.235129  299665 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 11:13:43.235220  299665 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0203 11:13:43.238832  299665 start.go:563] Will wait 60s for crictl version
	I0203 11:13:43.238907  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:13:43.243276  299665 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:13:43.286343  299665 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0203 11:13:43.286440  299665 ssh_runner.go:195] Run: crio --version
	I0203 11:13:43.326618  299665 ssh_runner.go:195] Run: crio --version
	I0203 11:13:43.369125  299665 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0203 11:13:43.372198  299665 cli_runner.go:164] Run: docker network inspect addons-595492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 11:13:43.390568  299665 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0203 11:13:43.393953  299665 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:13:43.404584  299665 kubeadm.go:883] updating cluster {Name:addons-595492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-595492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:13:43.404714  299665 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:13:43.404776  299665 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:13:43.481342  299665 crio.go:514] all images are preloaded for cri-o runtime.
	I0203 11:13:43.481363  299665 crio.go:433] Images already preloaded, skipping extraction
	I0203 11:13:43.481422  299665 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:13:43.526697  299665 crio.go:514] all images are preloaded for cri-o runtime.
	I0203 11:13:43.526721  299665 cache_images.go:84] Images are preloaded, skipping loading
	I0203 11:13:43.526730  299665 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.1 crio true true} ...
	I0203 11:13:43.526818  299665 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-595492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-595492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:13:43.526901  299665 ssh_runner.go:195] Run: crio config
	I0203 11:13:43.574522  299665 cni.go:84] Creating CNI manager for ""
	I0203 11:13:43.574546  299665 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0203 11:13:43.574557  299665 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 11:13:43.574603  299665 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-595492 NodeName:addons-595492 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 11:13:43.574771  299665 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-595492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 11:13:43.574844  299665 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:13:43.584621  299665 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:13:43.584784  299665 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 11:13:43.596001  299665 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0203 11:13:43.616474  299665 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:13:43.636359  299665 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0203 11:13:43.656262  299665 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0203 11:13:43.659904  299665 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
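The `/etc/hosts` update above is a drop-then-append pattern: strip any stale line for the name, re-append the current mapping, then copy the temp file back. A sketch of the same pattern against a scratch file (the scratch path stands in for `/etc/hosts`; entries are illustrative):

```shell
# Idempotent host-entry update, same shape as the log's bash one-liner.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.2\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any existing entry for the name, then append the current mapping.
{ grep -v 'control-plane.minikube.internal$' "$hosts"
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'control-plane.minikube.internal' "$hosts"   # exactly one entry survives
rm -f "$hosts"
```

Because the old entry is filtered out before the new one is appended, re-running the update never accumulates duplicate lines; the log guards it further with a `grep` probe first and uses `sudo cp` for the final write since `/etc/hosts` is root-owned.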
	I0203 11:13:43.671179  299665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:13:43.766156  299665 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:13:43.781338  299665 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492 for IP: 192.168.49.2
	I0203 11:13:43.781407  299665 certs.go:194] generating shared ca certs ...
	I0203 11:13:43.781439  299665 certs.go:226] acquiring lock for ca certs: {Name:mk4f223149c5bfcde67271a3237c074306b330a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:43.781617  299665 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-293520/.minikube/ca.key
	I0203 11:13:44.190505  299665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-293520/.minikube/ca.crt ...
	I0203 11:13:44.190591  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/ca.crt: {Name:mkfbf15d77c83ebfc1e26df37cd3fbade84c274e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:44.191915  299665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-293520/.minikube/ca.key ...
	I0203 11:13:44.191978  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/ca.key: {Name:mk0952537f31c11d0ef2800f098a7e074eb10b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:44.193087  299665 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-293520/.minikube/proxy-client-ca.key
	I0203 11:13:44.422480  299665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-293520/.minikube/proxy-client-ca.crt ...
	I0203 11:13:44.422514  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/proxy-client-ca.crt: {Name:mk3632f62375c23df2bc9405c397e2fa4624ff0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:44.423287  299665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-293520/.minikube/proxy-client-ca.key ...
	I0203 11:13:44.423306  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/proxy-client-ca.key: {Name:mk834089f9752a62b1dedbe2da092054c4b4edd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:44.423935  299665 certs.go:256] generating profile certs ...
	I0203 11:13:44.424022  299665 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.key
	I0203 11:13:44.424059  299665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt with IP's: []
	I0203 11:13:44.567763  299665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt ...
	I0203 11:13:44.567800  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: {Name:mke6e175f253a83ed03cc52487ca1d96334af82c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:44.567981  299665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.key ...
	I0203 11:13:44.567998  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.key: {Name:mka38aaa53bcf0f183e5c5619700383b462ad5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:44.568079  299665 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.key.76e20974
	I0203 11:13:44.568101  299665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.crt.76e20974 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0203 11:13:45.084435  299665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.crt.76e20974 ...
	I0203 11:13:45.084474  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.crt.76e20974: {Name:mk6f9bdba77b8161b47994e10e3d70d89c6d33b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:45.084707  299665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.key.76e20974 ...
	I0203 11:13:45.084721  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.key.76e20974: {Name:mkb56ef159ad69a5fb182a36a593666316e29c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:45.085399  299665 certs.go:381] copying /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.crt.76e20974 -> /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.crt
	I0203 11:13:45.085531  299665 certs.go:385] copying /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.key.76e20974 -> /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.key
	I0203 11:13:45.085606  299665 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/proxy-client.key
	I0203 11:13:45.085640  299665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/proxy-client.crt with IP's: []
	I0203 11:13:45.838373  299665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/proxy-client.crt ...
	I0203 11:13:45.838407  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/proxy-client.crt: {Name:mk506be011f4e8157e1e6f07b379ed6b5448520e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:45.838605  299665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/proxy-client.key ...
	I0203 11:13:45.838621  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/proxy-client.key: {Name:mk07cca49e7b2055408d8311638c9df619debdbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:45.838858  299665 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-293520/.minikube/certs/ca-key.pem (1675 bytes)
	I0203 11:13:45.838905  299665 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-293520/.minikube/certs/ca.pem (1082 bytes)
	I0203 11:13:45.838937  299665 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-293520/.minikube/certs/cert.pem (1123 bytes)
	I0203 11:13:45.838969  299665 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-293520/.minikube/certs/key.pem (1679 bytes)
	I0203 11:13:45.839686  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:13:45.866611  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 11:13:45.891189  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:13:45.915424  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 11:13:45.939161  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0203 11:13:45.962589  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 11:13:45.986772  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:13:46.013930  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:13:46.044705  299665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-293520/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:13:46.069963  299665 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:13:46.088557  299665 ssh_runner.go:195] Run: openssl version
	I0203 11:13:46.096660  299665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:13:46.106056  299665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:13:46.109422  299665 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 11:13 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:13:46.109489  299665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:13:46.116187  299665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
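The two steps above install `minikubeCA.pem` into the OpenSSL trust store: `openssl x509 -hash -noout` computes the subject hash, and the cert is then symlinked as `<hash>.0` under `/etc/ssl/certs` (here `b5213941.0`), which is the lookup name OpenSSL uses for CA files. A sketch of that naming convention with a throwaway self-signed cert in a scratch directory (not the real minikubeCA):

```shell
# Derive the <subject-hash>.0 trust-store name for a certificate.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
# OpenSSL resolves CAs by <subject-hash>.N symlinks in the certs dir.
ln -s "$tmp/ca.pem" "$tmp/$hash.0"
ls "$tmp/$hash.0"
rm -rf "$tmp"
```

The hash is eight hex digits derived from the certificate subject, so two CAs with the same subject would collide on the name; the `.0` suffix exists to disambiguate such collisions (`.1`, `.2`, ...).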
	I0203 11:13:46.125407  299665 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:13:46.128463  299665 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 11:13:46.128512  299665 kubeadm.go:392] StartCluster: {Name:addons-595492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-595492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:13:46.128620  299665 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:13:46.128681  299665 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:13:46.166615  299665 cri.go:89] found id: ""
	I0203 11:13:46.166724  299665 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:13:46.175369  299665 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:13:46.184044  299665 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0203 11:13:46.184144  299665 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:13:46.193032  299665 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:13:46.193111  299665 kubeadm.go:157] found existing configuration files:
	
	I0203 11:13:46.193171  299665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:13:46.201951  299665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:13:46.202036  299665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:13:46.210341  299665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:13:46.219093  299665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:13:46.219188  299665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:13:46.227498  299665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:13:46.236283  299665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:13:46.236403  299665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:13:46.244980  299665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:13:46.253470  299665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:13:46.253557  299665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:13:46.261773  299665 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 11:13:46.302275  299665 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0203 11:13:46.302719  299665 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:13:46.322658  299665 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0203 11:13:46.322730  299665 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0203 11:13:46.322773  299665 kubeadm.go:310] OS: Linux
	I0203 11:13:46.322825  299665 kubeadm.go:310] CGROUPS_CPU: enabled
	I0203 11:13:46.322878  299665 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0203 11:13:46.322940  299665 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0203 11:13:46.322995  299665 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0203 11:13:46.323048  299665 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0203 11:13:46.323111  299665 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0203 11:13:46.323161  299665 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0203 11:13:46.323215  299665 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0203 11:13:46.323266  299665 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0203 11:13:46.382284  299665 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:13:46.382459  299665 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:13:46.382589  299665 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0203 11:13:46.389523  299665 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:13:46.395869  299665 out.go:235]   - Generating certificates and keys ...
	I0203 11:13:46.396001  299665 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:13:46.396102  299665 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:13:46.892881  299665 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 11:13:47.365260  299665 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0203 11:13:48.271633  299665 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0203 11:13:48.744012  299665 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0203 11:13:49.599048  299665 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0203 11:13:49.599356  299665 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-595492 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0203 11:13:50.028958  299665 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0203 11:13:50.029312  299665 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-595492 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0203 11:13:50.476620  299665 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 11:13:51.000635  299665 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 11:13:51.771680  299665 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0203 11:13:51.772019  299665 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:13:52.213158  299665 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:13:52.549708  299665 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0203 11:13:53.294730  299665 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:13:53.915176  299665 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:13:54.866719  299665 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:13:54.867433  299665 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:13:54.870298  299665 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:13:54.873670  299665 out.go:235]   - Booting up control plane ...
	I0203 11:13:54.873801  299665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:13:54.873909  299665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:13:54.873990  299665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:13:54.885766  299665 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:13:54.892417  299665 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:13:54.892473  299665 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:13:54.992578  299665 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0203 11:13:54.992715  299665 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0203 11:13:56.996390  299665 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.003643925s
	I0203 11:13:56.996484  299665 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0203 11:14:03.497905  299665 kubeadm.go:310] [api-check] The API server is healthy after 6.501779749s
	I0203 11:14:03.518858  299665 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0203 11:14:03.531668  299665 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0203 11:14:03.559680  299665 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0203 11:14:03.559878  299665 kubeadm.go:310] [mark-control-plane] Marking the node addons-595492 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0203 11:14:03.572232  299665 kubeadm.go:310] [bootstrap-token] Using token: zoq17c.s8zwp2lezgvv7fau
	I0203 11:14:03.575065  299665 out.go:235]   - Configuring RBAC rules ...
	I0203 11:14:03.575204  299665 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0203 11:14:03.581981  299665 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0203 11:14:03.590537  299665 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0203 11:14:03.595340  299665 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0203 11:14:03.599182  299665 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0203 11:14:03.603280  299665 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0203 11:14:03.905158  299665 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0203 11:14:04.348974  299665 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0203 11:14:04.905200  299665 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0203 11:14:04.906384  299665 kubeadm.go:310] 
	I0203 11:14:04.906465  299665 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0203 11:14:04.906472  299665 kubeadm.go:310] 
	I0203 11:14:04.906551  299665 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0203 11:14:04.906556  299665 kubeadm.go:310] 
	I0203 11:14:04.906581  299665 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0203 11:14:04.906647  299665 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0203 11:14:04.906698  299665 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0203 11:14:04.906702  299665 kubeadm.go:310] 
	I0203 11:14:04.906756  299665 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0203 11:14:04.906760  299665 kubeadm.go:310] 
	I0203 11:14:04.906808  299665 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0203 11:14:04.906816  299665 kubeadm.go:310] 
	I0203 11:14:04.906868  299665 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0203 11:14:04.906942  299665 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0203 11:14:04.907010  299665 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0203 11:14:04.907019  299665 kubeadm.go:310] 
	I0203 11:14:04.907104  299665 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0203 11:14:04.907180  299665 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0203 11:14:04.907185  299665 kubeadm.go:310] 
	I0203 11:14:04.907268  299665 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zoq17c.s8zwp2lezgvv7fau \
	I0203 11:14:04.907371  299665 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:66366b3baf96aefaa7536efd01011e5fdff175cc080f4002126ab4e5b12d5a43 \
	I0203 11:14:04.907391  299665 kubeadm.go:310] 	--control-plane 
	I0203 11:14:04.907395  299665 kubeadm.go:310] 
	I0203 11:14:04.907480  299665 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0203 11:14:04.907484  299665 kubeadm.go:310] 
	I0203 11:14:04.907566  299665 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zoq17c.s8zwp2lezgvv7fau \
	I0203 11:14:04.907668  299665 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:66366b3baf96aefaa7536efd01011e5fdff175cc080f4002126ab4e5b12d5a43 
	I0203 11:14:04.910024  299665 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0203 11:14:04.910261  299665 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0203 11:14:04.910371  299665 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:14:04.910392  299665 cni.go:84] Creating CNI manager for ""
	I0203 11:14:04.910401  299665 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0203 11:14:04.913573  299665 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0203 11:14:04.916507  299665 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0203 11:14:04.920233  299665 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0203 11:14:04.920255  299665 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0203 11:14:04.946142  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0203 11:14:05.233450  299665 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 11:14:05.233600  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:14:05.233699  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-595492 minikube.k8s.io/updated_at=2025_02_03T11_14_05_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=addons-595492 minikube.k8s.io/primary=true
	I0203 11:14:05.250248  299665 ops.go:34] apiserver oom_adj: -16
	I0203 11:14:05.379094  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:14:05.880015  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:14:06.379227  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:14:06.879191  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:14:07.380160  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:14:07.879624  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:14:08.379488  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:14:08.879429  299665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:14:09.020347  299665 kubeadm.go:1113] duration metric: took 3.786797681s to wait for elevateKubeSystemPrivileges
	I0203 11:14:09.020461  299665 kubeadm.go:394] duration metric: took 22.891950831s to StartCluster
	I0203 11:14:09.020495  299665 settings.go:142] acquiring lock: {Name:mk385dcc27f9a798b64bf671a2b8c73360755841 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:14:09.021181  299665 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20354-293520/kubeconfig
	I0203 11:14:09.021669  299665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/kubeconfig: {Name:mke0801077b0d45eab0867ccfc31548cca121684 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:14:09.022415  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 11:14:09.022482  299665 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:14:09.022683  299665 config.go:182] Loaded profile config "addons-595492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:14:09.022718  299665 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0203 11:14:09.022793  299665 addons.go:69] Setting yakd=true in profile "addons-595492"
	I0203 11:14:09.022814  299665 addons.go:238] Setting addon yakd=true in "addons-595492"
	I0203 11:14:09.022848  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.022910  299665 addons.go:69] Setting inspektor-gadget=true in profile "addons-595492"
	I0203 11:14:09.022937  299665 addons.go:238] Setting addon inspektor-gadget=true in "addons-595492"
	I0203 11:14:09.022960  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.023319  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.023391  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.023876  299665 addons.go:69] Setting metrics-server=true in profile "addons-595492"
	I0203 11:14:09.023908  299665 addons.go:238] Setting addon metrics-server=true in "addons-595492"
	I0203 11:14:09.023938  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.024401  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.024546  299665 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-595492"
	I0203 11:14:09.024760  299665 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-595492"
	I0203 11:14:09.024809  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.024943  299665 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-595492"
	I0203 11:14:09.024970  299665 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-595492"
	I0203 11:14:09.024995  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.025428  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.026053  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.028955  299665 addons.go:69] Setting registry=true in profile "addons-595492"
	I0203 11:14:09.028991  299665 addons.go:238] Setting addon registry=true in "addons-595492"
	I0203 11:14:09.029025  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.029481  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.024693  299665 addons.go:69] Setting cloud-spanner=true in profile "addons-595492"
	I0203 11:14:09.036358  299665 addons.go:238] Setting addon cloud-spanner=true in "addons-595492"
	I0203 11:14:09.036421  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.036982  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.037965  299665 addons.go:69] Setting storage-provisioner=true in profile "addons-595492"
	I0203 11:14:09.038001  299665 addons.go:238] Setting addon storage-provisioner=true in "addons-595492"
	I0203 11:14:09.038044  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.038519  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.024705  299665 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-595492"
	I0203 11:14:09.041158  299665 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-595492"
	I0203 11:14:09.041244  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.041924  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.045217  299665 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-595492"
	I0203 11:14:09.045265  299665 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-595492"
	I0203 11:14:09.045617  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.024709  299665 addons.go:69] Setting default-storageclass=true in profile "addons-595492"
	I0203 11:14:09.060681  299665 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-595492"
	I0203 11:14:09.061045  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.061204  299665 addons.go:69] Setting volcano=true in profile "addons-595492"
	I0203 11:14:09.061221  299665 addons.go:238] Setting addon volcano=true in "addons-595492"
	I0203 11:14:09.061249  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.061765  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.094169  299665 addons.go:69] Setting volumesnapshots=true in profile "addons-595492"
	I0203 11:14:09.094204  299665 addons.go:238] Setting addon volumesnapshots=true in "addons-595492"
	I0203 11:14:09.094251  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.094741  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.024713  299665 addons.go:69] Setting gcp-auth=true in profile "addons-595492"
	I0203 11:14:09.096486  299665 mustload.go:65] Loading cluster: addons-595492
	I0203 11:14:09.096695  299665 config.go:182] Loaded profile config "addons-595492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:14:09.096956  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.024717  299665 addons.go:69] Setting ingress=true in profile "addons-595492"
	I0203 11:14:09.104353  299665 addons.go:238] Setting addon ingress=true in "addons-595492"
	I0203 11:14:09.104409  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.105038  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.024720  299665 addons.go:69] Setting ingress-dns=true in profile "addons-595492"
	I0203 11:14:09.114713  299665 addons.go:238] Setting addon ingress-dns=true in "addons-595492"
	I0203 11:14:09.114770  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.115336  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.119042  299665 out.go:177] * Verifying Kubernetes components...
	I0203 11:14:09.122601  299665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:14:09.158895  299665 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0203 11:14:09.186269  299665 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0203 11:14:09.192438  299665 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0203 11:14:09.221816  299665 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0203 11:14:09.221886  299665 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0203 11:14:09.221996  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.236049  299665 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0203 11:14:09.238646  299665 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0203 11:14:09.241083  299665 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0203 11:14:09.241517  299665 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0203 11:14:09.244678  299665 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0203 11:14:09.246088  299665 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0203 11:14:09.246180  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.273232  299665 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0203 11:14:09.273296  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0203 11:14:09.273375  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.244692  299665 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0203 11:14:09.275639  299665 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0203 11:14:09.275714  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.282954  299665 out.go:177]   - Using image docker.io/registry:2.8.3
	I0203 11:14:09.284693  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.300871  299665 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0203 11:14:09.300892  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0203 11:14:09.300955  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.348671  299665 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:14:09.349034  299665 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0203 11:14:09.349048  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0203 11:14:09.349131  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	W0203 11:14:09.349756  299665 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0203 11:14:09.351065  299665 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0203 11:14:09.351079  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0203 11:14:09.351139  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.354992  299665 addons.go:238] Setting addon default-storageclass=true in "addons-595492"
	I0203 11:14:09.355032  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.355545  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.358646  299665 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:14:09.358669  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 11:14:09.358727  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.368617  299665 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0203 11:14:09.369917  299665 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-595492"
	I0203 11:14:09.369959  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:09.370476  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:09.392145  299665 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0203 11:14:09.395915  299665 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0203 11:14:09.396067  299665 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0203 11:14:09.404747  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.406757  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.417189  299665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0203 11:14:09.427760  299665 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0203 11:14:09.427783  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0203 11:14:09.427858  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.430052  299665 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0203 11:14:09.435209  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.442711  299665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0203 11:14:09.446829  299665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0203 11:14:09.446958  299665 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0203 11:14:09.449712  299665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0203 11:14:09.460020  299665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0203 11:14:09.462901  299665 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0203 11:14:09.468619  299665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0203 11:14:09.468929  299665 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0203 11:14:09.468941  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0203 11:14:09.469006  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.485341  299665 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0203 11:14:09.486708  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.491487  299665 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0203 11:14:09.495695  299665 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0203 11:14:09.495728  299665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0203 11:14:09.495797  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.547958  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.578511  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.579050  299665 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 11:14:09.579094  299665 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 11:14:09.579162  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.579356  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.613875  299665 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:14:09.614069  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0203 11:14:09.614169  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.621786  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.654127  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.654897  299665 out.go:177]   - Using image docker.io/busybox:stable
	I0203 11:14:09.665382  299665 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0203 11:14:09.668118  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.668552  299665 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0203 11:14:09.668629  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0203 11:14:09.668702  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:09.684910  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.722425  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.724440  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.729113  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:09.806154  299665 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0203 11:14:09.806227  299665 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0203 11:14:09.865101  299665 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0203 11:14:09.865127  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0203 11:14:09.927153  299665 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0203 11:14:09.927174  299665 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0203 11:14:09.930043  299665 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0203 11:14:09.930062  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0203 11:14:09.934780  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0203 11:14:10.076343  299665 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0203 11:14:10.076436  299665 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0203 11:14:10.108929  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0203 11:14:10.114945  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0203 11:14:10.154151  299665 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0203 11:14:10.154222  299665 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0203 11:14:10.157719  299665 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0203 11:14:10.157758  299665 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0203 11:14:10.185246  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0203 11:14:10.213838  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:14:10.219057  299665 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 11:14:10.219093  299665 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0203 11:14:10.237592  299665 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0203 11:14:10.237668  299665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0203 11:14:10.249549  299665 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0203 11:14:10.249578  299665 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0203 11:14:10.264641  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0203 11:14:10.274999  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0203 11:14:10.280004  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 11:14:10.294192  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0203 11:14:10.312847  299665 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0203 11:14:10.312871  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0203 11:14:10.320646  299665 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0203 11:14:10.320678  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0203 11:14:10.375993  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 11:14:10.391421  299665 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0203 11:14:10.391446  299665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0203 11:14:10.394668  299665 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0203 11:14:10.394695  299665 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0203 11:14:10.461739  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0203 11:14:10.486344  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0203 11:14:10.557445  299665 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0203 11:14:10.557490  299665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0203 11:14:10.561524  299665 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0203 11:14:10.561552  299665 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0203 11:14:10.733799  299665 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0203 11:14:10.733835  299665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0203 11:14:10.760304  299665 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0203 11:14:10.760343  299665 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0203 11:14:10.967471  299665 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0203 11:14:10.967508  299665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0203 11:14:11.044930  299665 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0203 11:14:11.044954  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0203 11:14:11.147227  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0203 11:14:11.178446  299665 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0203 11:14:11.178474  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0203 11:14:11.242508  299665 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0203 11:14:11.242537  299665 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0203 11:14:11.305307  299665 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0203 11:14:11.305330  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0203 11:14:11.367445  299665 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0203 11:14:11.367477  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0203 11:14:11.457140  299665 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0203 11:14:11.457216  299665 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0203 11:14:11.628050  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0203 11:14:11.658452  299665 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.044347787s)
	I0203 11:14:11.658560  299665 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0203 11:14:11.658534  299665 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.044636401s)
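The "host record injected" line above is the result of the sed pipeline started at 11:14:09.614069: it rewrites the `coredns` ConfigMap so the Corefile gains a `hosts` block ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive before `errors`). Reconstructed from the sed expression itself, the injected block is:

```
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
```

With this in place, cluster DNS resolves `host.minikube.internal` to the host-side gateway address (192.168.49.1 on the default docker network), while `fallthrough` lets all other names continue to the `forward` plugin.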
	I0203 11:14:11.660403  299665 node_ready.go:35] waiting up to 6m0s for node "addons-595492" to be "Ready" ...
	I0203 11:14:13.271427  299665 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-595492" context rescaled to 1 replicas
	I0203 11:14:14.000655  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:14.326464  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.391646101s)
	I0203 11:14:14.326536  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.217538656s)
	I0203 11:14:14.326565  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.211544677s)
	I0203 11:14:15.490500  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.30520857s)
	I0203 11:14:15.490673  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.276763674s)
	I0203 11:14:16.179396  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:16.201459  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.936772024s)
	I0203 11:14:16.201493  299665 addons.go:479] Verifying addon ingress=true in "addons-595492"
	I0203 11:14:16.201691  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.926663985s)
	I0203 11:14:16.201729  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.921702667s)
	I0203 11:14:16.201952  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.907734531s)
	I0203 11:14:16.202115  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.826095976s)
	I0203 11:14:16.202134  299665 addons.go:479] Verifying addon metrics-server=true in "addons-595492"
	I0203 11:14:16.202172  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.740401633s)
	I0203 11:14:16.202185  299665 addons.go:479] Verifying addon registry=true in "addons-595492"
	I0203 11:14:16.202779  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.716389466s)
	I0203 11:14:16.202910  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.055633611s)
	W0203 11:14:16.202950  299665 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0203 11:14:16.202977  299665 retry.go:31] will retry after 361.749596ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
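The failure above is the well-known ordering race when a CRD and a custom resource of that kind are applied in the same `kubectl apply` batch: the `VolumeSnapshotClass` object is rejected because the CRD is not yet registered ("ensure CRDs are installed first"). minikube handles this by retrying after a short backoff (here 361ms, followed by an `apply --force` pass). A minimal sketch of that retry-with-backoff behaviour — an illustration of the pattern in the log, not minikube's actual `retry.go` implementation:

```python
import time

def retry_after(fn, attempts=5, backoff=0.01):
    """Retry fn until it succeeds or attempts are exhausted,
    sleeping `backoff` seconds between failures. Sketch of the
    'will retry after ...' behaviour seen in the log."""
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:  # in the log: kubectl exiting non-zero
            last = exc
            time.sleep(backoff)
    raise last

calls = 0
def apply_snapshot_class():
    # Simulate 'no matches for kind "VolumeSnapshotClass"' until the
    # CRD has had time to register, as if it becomes Established on
    # the third attempt. Hypothetical stand-in for the kubectl call.
    global calls
    calls += 1
    if calls < 3:
        raise RuntimeError("ensure CRDs are installed first")
    return "volumesnapshotclass/csi-hostpath-snapclass created"

print(retry_after(apply_snapshot_class), calls)
```

Once the CRDs' `Established` condition is true on the API server, the same apply succeeds, which is exactly what the later `apply --force` completion in the log shows.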
	I0203 11:14:16.206490  299665 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-595492 service yakd-dashboard -n yakd-dashboard
	
	I0203 11:14:16.206504  299665 out.go:177] * Verifying registry addon...
	I0203 11:14:16.206532  299665 out.go:177] * Verifying ingress addon...
	I0203 11:14:16.211366  299665 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0203 11:14:16.212210  299665 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0203 11:14:16.217020  299665 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0203 11:14:16.230221  299665 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0203 11:14:16.230253  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:16.231001  299665 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0203 11:14:16.231016  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:16.418413  299665 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0203 11:14:16.418516  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:16.441804  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:16.547126  299665 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0203 11:14:16.554508  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.92636801s)
	I0203 11:14:16.554591  299665 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-595492"
	I0203 11:14:16.559762  299665 out.go:177] * Verifying csi-hostpath-driver addon...
	I0203 11:14:16.563549  299665 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0203 11:14:16.565134  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0203 11:14:16.573839  299665 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0203 11:14:16.573913  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:16.587859  299665 addons.go:238] Setting addon gcp-auth=true in "addons-595492"
	I0203 11:14:16.587964  299665 host.go:66] Checking if "addons-595492" exists ...
	I0203 11:14:16.588503  299665 cli_runner.go:164] Run: docker container inspect addons-595492 --format={{.State.Status}}
	I0203 11:14:16.616123  299665 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0203 11:14:16.616176  299665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-595492
	I0203 11:14:16.639967  299665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/addons-595492/id_rsa Username:docker}
	I0203 11:14:16.716691  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:16.717786  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:17.066805  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:17.216112  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:17.216373  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:17.567269  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:17.715708  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:17.716222  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:18.067758  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:18.215154  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:18.215813  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:18.566900  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:18.664412  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:18.716240  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:18.716397  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:19.067743  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:19.216614  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:19.217137  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:19.262081  299665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.696887214s)
	I0203 11:14:19.262171  299665 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.646020716s)
	I0203 11:14:19.265312  299665 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0203 11:14:19.268179  299665 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0203 11:14:19.271158  299665 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0203 11:14:19.271222  299665 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0203 11:14:19.290880  299665 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0203 11:14:19.290903  299665 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0203 11:14:19.309273  299665 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0203 11:14:19.309300  299665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0203 11:14:19.327657  299665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0203 11:14:19.567814  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:19.719397  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:19.720460  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:19.824710  299665 addons.go:479] Verifying addon gcp-auth=true in "addons-595492"
	I0203 11:14:19.827871  299665 out.go:177] * Verifying gcp-auth addon...
	I0203 11:14:19.830687  299665 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0203 11:14:19.844511  299665 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0203 11:14:19.844538  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:20.067459  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:20.215753  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:20.216538  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:20.336190  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:20.567715  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:20.715417  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:20.716162  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:20.834882  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:21.067963  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:21.163398  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:21.215112  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:21.215751  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:21.334360  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:21.566889  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:21.715728  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:21.716994  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:21.834317  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:22.067269  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:22.215342  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:22.215546  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:22.334577  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:22.567612  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:22.715550  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:22.716223  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:22.834714  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:23.066987  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:23.164269  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:23.215559  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:23.216399  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:23.335006  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:23.567867  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:23.715703  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:23.716462  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:23.834493  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:24.067850  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:24.215582  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:24.217191  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:24.334795  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:24.566787  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:24.715256  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:24.716009  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:24.834664  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:25.067244  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:25.216358  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:25.216676  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:25.334217  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:25.567627  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:25.664045  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:25.715170  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:25.715820  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:25.834409  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:26.067245  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:26.215652  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:26.216396  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:26.334749  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:26.567216  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:26.715043  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:26.716607  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:26.834446  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:27.067638  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:27.215937  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:27.216635  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:27.334469  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:27.566748  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:27.714996  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:27.715861  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:27.834182  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:28.067660  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:28.163939  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:28.216172  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:28.216459  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:28.334495  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:28.567775  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:28.715337  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:28.716305  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:28.834449  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:29.066695  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:29.216032  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:29.216802  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:29.334321  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:29.567168  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:29.714830  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:29.715347  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:29.834413  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:30.071897  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:30.164906  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:30.215358  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:30.216288  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:30.334712  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:30.567918  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:30.714730  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:30.715467  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:30.834286  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:31.067866  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:31.215106  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:31.216244  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:31.335733  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:31.568880  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:31.715860  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:31.716167  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:31.833929  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:32.067502  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:32.215249  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:32.216553  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:32.334307  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:32.567826  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:32.664774  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:32.715281  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:32.715850  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:32.834091  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:33.067619  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:33.215762  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:33.216708  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:33.334585  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:33.568428  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:33.715205  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:33.716358  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:33.834076  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:34.067887  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:34.214822  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:34.215531  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:34.334574  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:34.566697  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:34.715409  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:34.716157  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:34.835160  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:35.068010  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:35.164327  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:35.215859  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:35.216969  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:35.334657  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:35.566908  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:35.715960  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:35.716512  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:35.834338  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:36.067464  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:36.215177  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:36.215907  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:36.334373  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:36.567661  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:36.715320  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:36.716052  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:36.834757  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:37.067578  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:37.215860  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:37.216742  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:37.334097  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:37.567413  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:37.663460  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:37.715550  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:37.717193  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:37.840042  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:38.067993  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:38.215363  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:38.216115  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:38.334406  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:38.566795  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:38.715143  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:38.715755  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:38.834257  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:39.068386  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:39.215502  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:39.216308  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:39.335177  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:39.567679  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:39.664283  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:39.715822  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:39.716190  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:39.835451  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:40.067973  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:40.215628  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:40.216974  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:40.334186  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:40.567597  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:40.715843  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:40.716628  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:40.834054  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:41.067640  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:41.216265  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:41.216495  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:41.334716  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:41.567270  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:41.715559  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:41.716543  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:41.834676  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:42.067834  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:42.164818  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:42.215456  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:42.216058  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:42.334395  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:42.567708  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:42.715306  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:42.716306  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:42.834477  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:43.067374  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:43.216184  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:43.216975  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:43.333863  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:43.567584  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:43.716744  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:43.717519  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:43.834616  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:44.067893  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:44.164954  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:44.215817  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:44.217423  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:44.334544  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:44.566760  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:44.715791  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:44.716677  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:44.834166  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:45.068888  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:45.216440  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:45.217285  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:45.335101  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:45.568007  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:45.715512  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:45.715652  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:45.834791  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:46.067500  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:46.216309  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:46.217698  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:46.334048  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:46.567045  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:46.664421  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:46.715428  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:46.715824  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:46.834740  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:47.067798  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:47.216842  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:47.217307  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:47.334514  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:47.566968  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:47.715929  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:47.716882  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:47.834717  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:48.069844  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:48.216529  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:48.217118  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:48.334899  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:48.567933  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:48.715634  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:48.715745  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:48.833891  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:49.066871  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:49.164125  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:49.215147  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:49.216751  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:49.334036  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:49.567779  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:49.715502  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:49.716549  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:49.833826  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:50.067607  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:50.215620  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:50.216662  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:50.333977  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:50.566860  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:50.715906  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:50.716671  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:50.833915  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:51.067703  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:51.215509  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:51.216876  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:51.334420  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:51.567313  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:51.663762  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:51.715960  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:51.716749  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:51.834604  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:52.067325  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:52.215580  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:52.216341  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:52.333847  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:52.566977  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:52.715910  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:52.716806  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:52.834214  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:53.067915  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:53.215684  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:53.218148  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:53.334336  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:53.567313  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:53.664653  299665 node_ready.go:53] node "addons-595492" has status "Ready":"False"
	I0203 11:14:53.715841  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:53.716360  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:53.835246  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:54.067667  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:54.216040  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:54.216787  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:54.334402  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:54.567060  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:54.716202  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:54.716431  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:54.834457  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:55.067852  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:55.215891  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:55.217725  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:55.334702  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:55.567440  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:55.715350  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:55.715915  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:55.879520  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:56.145222  299665 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0203 11:14:56.145252  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:56.166412  299665 node_ready.go:49] node "addons-595492" has status "Ready":"True"
	I0203 11:14:56.166440  299665 node_ready.go:38] duration metric: took 44.505899798s for node "addons-595492" to be "Ready" ...
	I0203 11:14:56.166452  299665 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:14:56.175588  299665 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rkfp9" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:56.249491  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:56.250721  299665 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0203 11:14:56.250747  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:56.339462  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:56.569820  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:56.721650  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:56.721983  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:56.836133  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:57.091417  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:57.215897  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:57.216866  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:57.334283  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:57.569612  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:57.684553  299665 pod_ready.go:93] pod "coredns-668d6bf9bc-rkfp9" in "kube-system" namespace has status "Ready":"True"
	I0203 11:14:57.684610  299665 pod_ready.go:82] duration metric: took 1.508979876s for pod "coredns-668d6bf9bc-rkfp9" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:57.684636  299665 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-595492" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:57.693771  299665 pod_ready.go:93] pod "etcd-addons-595492" in "kube-system" namespace has status "Ready":"True"
	I0203 11:14:57.693799  299665 pod_ready.go:82] duration metric: took 9.154682ms for pod "etcd-addons-595492" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:57.693814  299665 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-595492" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:57.700546  299665 pod_ready.go:93] pod "kube-apiserver-addons-595492" in "kube-system" namespace has status "Ready":"True"
	I0203 11:14:57.700602  299665 pod_ready.go:82] duration metric: took 6.779282ms for pod "kube-apiserver-addons-595492" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:57.700633  299665 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-595492" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:57.706557  299665 pod_ready.go:93] pod "kube-controller-manager-addons-595492" in "kube-system" namespace has status "Ready":"True"
	I0203 11:14:57.706583  299665 pod_ready.go:82] duration metric: took 5.935741ms for pod "kube-controller-manager-addons-595492" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:57.706597  299665 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fc6ln" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:57.718515  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:57.719342  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:57.765221  299665 pod_ready.go:93] pod "kube-proxy-fc6ln" in "kube-system" namespace has status "Ready":"True"
	I0203 11:14:57.765294  299665 pod_ready.go:82] duration metric: took 58.68808ms for pod "kube-proxy-fc6ln" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:57.765321  299665 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-595492" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:57.834055  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:58.069261  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:58.165181  299665 pod_ready.go:93] pod "kube-scheduler-addons-595492" in "kube-system" namespace has status "Ready":"True"
	I0203 11:14:58.165222  299665 pod_ready.go:82] duration metric: took 399.878636ms for pod "kube-scheduler-addons-595492" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:58.165253  299665 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace to be "Ready" ...
	I0203 11:14:58.217521  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:58.221678  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:58.336437  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:58.568865  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:58.716599  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:58.716911  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:58.837226  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:59.069466  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:59.217882  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:59.218934  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:59.334781  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:14:59.569249  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:14:59.716146  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:14:59.718025  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:14:59.836406  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:00.087626  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:00.216378  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:00.227029  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:00.240515  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:00.352978  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:00.573583  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:00.728757  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:00.729979  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:00.836873  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:01.070065  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:01.219743  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:01.221191  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:01.342859  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:01.569198  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:01.716967  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:01.718917  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:01.834717  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:02.070763  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:02.218780  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:02.220516  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:02.334532  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:02.568760  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:02.678726  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:02.715436  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:02.718607  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:02.834933  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:03.069028  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:03.221691  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:03.222281  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:03.334675  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:03.568815  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:03.715630  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:03.719016  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:03.834613  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:04.069471  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:04.216290  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:04.217179  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:04.334692  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:04.569077  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:04.717111  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:04.717284  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:04.836390  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:05.072298  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:05.173510  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:05.218892  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:05.221485  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:05.335255  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:05.570023  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:05.748462  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:05.750153  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:05.842370  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:06.069758  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:06.218106  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:06.219539  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:06.335598  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:06.570025  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:06.717734  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:06.719037  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:06.835318  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:07.068737  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:07.241102  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:07.242619  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:07.334242  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:07.580884  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:07.673089  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:07.717260  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:07.718026  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:07.835677  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:08.069037  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:08.227488  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:08.228412  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:08.334651  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:08.569555  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:08.722342  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:08.723670  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:08.833924  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:09.068142  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:09.217849  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:09.217965  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:09.338408  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:09.567977  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:09.685554  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:09.718470  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:09.720383  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:09.834939  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:10.071524  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:10.217754  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:10.218430  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:10.335459  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:10.573932  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:10.720682  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:10.721664  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:10.834487  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:11.069316  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:11.216166  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:11.217682  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:11.334794  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:11.569198  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:11.716331  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:11.719257  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:11.834610  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:12.069645  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:12.173032  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:12.217471  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:12.218463  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:12.334858  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:12.568975  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:12.718297  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:12.721286  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:12.835758  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:13.069548  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:13.217077  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:13.217672  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:13.334267  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:13.568712  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:13.716124  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:13.716460  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:13.834806  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:14.069204  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:14.216460  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:14.216778  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:14.334176  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:14.569401  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:14.671674  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:14.716441  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:14.716599  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:14.834928  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:15.069206  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:15.216145  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:15.216790  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:15.335006  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:15.570780  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:15.719831  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:15.721521  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:15.834684  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:16.070328  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:16.217704  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:16.219473  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:16.335037  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:16.568597  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:16.715388  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:16.717132  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:16.834981  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:17.069336  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:17.175567  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:17.216978  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:17.218155  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:17.336290  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:17.568872  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:17.718396  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:17.719052  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:17.835460  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:18.069835  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:18.216986  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:18.218640  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:18.334227  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:18.568644  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:18.718643  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:18.719459  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:18.835475  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:19.071266  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:19.175712  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:19.221672  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:19.223191  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:19.339034  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:19.568841  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:19.718574  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:19.720438  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:19.834512  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:20.072857  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:20.216173  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:20.218575  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:20.334603  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:20.569668  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:20.718855  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:20.720887  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:20.839247  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:21.077973  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:21.188403  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:21.217090  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:21.224159  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:21.336620  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:21.571701  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:21.718117  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:21.718395  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:21.834938  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:22.069053  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:22.224235  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:22.225806  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:22.335156  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:22.568149  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:22.716604  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:22.717429  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:22.834930  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:23.069408  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:23.216357  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:23.216862  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:23.336197  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:23.568822  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:23.672003  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:23.715751  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:23.716765  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:23.834830  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:24.069804  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:24.217727  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:24.218269  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:24.334843  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:24.569160  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:24.717395  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:24.719827  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:24.835357  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:25.070426  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:25.220108  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:25.221537  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:25.341691  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:25.570370  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:25.676728  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:25.719948  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:25.721956  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:25.834894  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:26.070024  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:26.217883  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:26.219573  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:26.334788  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:26.570025  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:26.718406  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:26.720031  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:26.835835  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:27.069487  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:27.216730  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:27.217431  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:27.335038  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:27.568978  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:27.719186  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:27.720685  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:27.840406  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:28.069674  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:28.172263  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:28.217265  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:28.220075  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:28.335819  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:28.574722  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:28.717281  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:28.717985  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:28.839175  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:29.068634  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:29.232530  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:29.233400  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:29.335107  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:29.568624  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:29.717402  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:29.718552  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:29.835045  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:30.070176  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:30.172733  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:30.218157  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:30.219847  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:30.335102  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:30.569331  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:30.719911  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:30.720196  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:30.835118  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:31.069788  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:31.216286  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:31.217056  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:31.336250  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:31.570410  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:31.721993  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:31.724078  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:31.836681  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:32.069969  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:32.173830  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:32.218416  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:32.219785  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:32.335352  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:32.569498  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:32.717299  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:32.718003  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:32.836784  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:33.069850  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:33.218116  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:33.219952  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:33.335881  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:33.570186  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:33.717376  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:33.718063  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:33.835607  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:34.074778  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:34.217919  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:34.220502  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:34.341620  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:34.570933  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:34.672126  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:34.717110  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:34.717609  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:34.834657  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:35.069408  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:35.216592  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:35.217497  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:35.335329  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:35.569578  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:35.718235  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:35.719813  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:35.837818  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:36.070763  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:36.216984  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:36.219890  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:36.336306  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:36.573652  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:36.685644  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:36.728880  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:36.730475  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:36.834768  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:37.070027  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:37.217936  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:37.218347  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:37.335153  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:37.572273  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:37.723987  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:37.726651  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:37.835477  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:38.071940  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:38.215776  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:38.217527  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:38.334294  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:38.570563  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:38.717147  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:38.717779  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:38.834668  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:39.068819  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:39.176632  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:39.225292  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:39.226354  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:39.334977  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:39.569581  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:39.716660  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:39.718373  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:39.835710  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:40.070485  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:40.216886  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:40.217629  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:40.333863  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:40.568789  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:40.720527  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:40.721551  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:40.842146  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:41.069862  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:41.217683  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:41.218017  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:41.334458  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:41.569776  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:41.671671  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:41.717842  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:41.719054  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:41.833917  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:42.069145  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:42.218549  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:42.219912  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:42.337534  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:42.569937  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:42.716223  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:42.718306  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:42.834787  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:43.069492  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:43.216253  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:43.217953  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:43.334586  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:43.570707  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:43.672075  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:43.716822  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:43.717696  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:43.834506  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:44.068195  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:44.215967  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:44.216404  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:44.335007  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:44.568121  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:44.716320  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:44.717772  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:44.835768  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:45.084085  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:45.234967  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:45.235634  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:45.336959  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:45.569776  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:45.673857  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:45.718948  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:45.719200  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:45.835386  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:46.068755  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:46.219029  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:46.220432  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:46.335142  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:46.574254  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:46.717339  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:46.718597  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:46.834425  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:47.070789  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:47.216260  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:47.222583  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:47.334392  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:47.598373  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:47.716929  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:47.718018  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:47.835002  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:48.075404  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:48.185534  299665 pod_ready.go:103] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"False"
	I0203 11:15:48.218882  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:48.222174  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:48.335417  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:48.570043  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:48.676661  299665 pod_ready.go:93] pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace has status "Ready":"True"
	I0203 11:15:48.676689  299665 pod_ready.go:82] duration metric: took 50.511418254s for pod "metrics-server-7fbb699795-kdlwk" in "kube-system" namespace to be "Ready" ...
	I0203 11:15:48.676702  299665 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bbdcr" in "kube-system" namespace to be "Ready" ...
	I0203 11:15:48.685252  299665 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-bbdcr" in "kube-system" namespace has status "Ready":"True"
	I0203 11:15:48.685280  299665 pod_ready.go:82] duration metric: took 8.568651ms for pod "nvidia-device-plugin-daemonset-bbdcr" in "kube-system" namespace to be "Ready" ...
	I0203 11:15:48.685305  299665 pod_ready.go:39] duration metric: took 52.518839992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:15:48.685330  299665 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:15:48.685438  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:15:48.685516  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:15:48.751253  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:48.751915  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:48.779904  299665 cri.go:89] found id: "7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f"
	I0203 11:15:48.779928  299665 cri.go:89] found id: ""
	I0203 11:15:48.779936  299665 logs.go:282] 1 containers: [7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f]
	I0203 11:15:48.779992  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:15:48.791344  299665 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:15:48.791415  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:15:48.839513  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:48.874843  299665 cri.go:89] found id: "3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae"
	I0203 11:15:48.874913  299665 cri.go:89] found id: ""
	I0203 11:15:48.874926  299665 logs.go:282] 1 containers: [3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae]
	I0203 11:15:48.875032  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:15:48.884386  299665 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:15:48.884551  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:15:49.042423  299665 cri.go:89] found id: "1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d"
	I0203 11:15:49.042493  299665 cri.go:89] found id: ""
	I0203 11:15:49.042516  299665 logs.go:282] 1 containers: [1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d]
	I0203 11:15:49.042606  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:15:49.053154  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:15:49.053279  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:15:49.069569  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:49.218984  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:49.220897  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:49.235275  299665 cri.go:89] found id: "862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb"
	I0203 11:15:49.235298  299665 cri.go:89] found id: ""
	I0203 11:15:49.235307  299665 logs.go:282] 1 containers: [862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb]
	I0203 11:15:49.235393  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:15:49.247830  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:15:49.247927  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:15:49.334937  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:49.362945  299665 cri.go:89] found id: "04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e"
	I0203 11:15:49.362968  299665 cri.go:89] found id: ""
	I0203 11:15:49.362976  299665 logs.go:282] 1 containers: [04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e]
	I0203 11:15:49.363061  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:15:49.375206  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:15:49.375301  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:15:49.455149  299665 cri.go:89] found id: "725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806"
	I0203 11:15:49.455172  299665 cri.go:89] found id: ""
	I0203 11:15:49.455181  299665 logs.go:282] 1 containers: [725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806]
	I0203 11:15:49.455255  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:15:49.461212  299665 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:15:49.461308  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:15:49.569705  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:49.579862  299665 cri.go:89] found id: "f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079"
	I0203 11:15:49.579885  299665 cri.go:89] found id: ""
	I0203 11:15:49.579894  299665 logs.go:282] 1 containers: [f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079]
	I0203 11:15:49.579974  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:15:49.585298  299665 logs.go:123] Gathering logs for coredns [1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d] ...
	I0203 11:15:49.585328  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d"
	I0203 11:15:49.653809  299665 logs.go:123] Gathering logs for kube-scheduler [862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb] ...
	I0203 11:15:49.653840  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb"
	I0203 11:15:49.718140  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:49.719382  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:49.731910  299665 logs.go:123] Gathering logs for kube-controller-manager [725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806] ...
	I0203 11:15:49.731982  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806"
	I0203 11:15:49.820340  299665 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:15:49.820376  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:15:49.836011  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:49.941884  299665 logs.go:123] Gathering logs for dmesg ...
	I0203 11:15:49.941924  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:15:49.961176  299665 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:15:49.961205  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 11:15:50.069829  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:50.206202  299665 logs.go:123] Gathering logs for etcd [3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae] ...
	I0203 11:15:50.206281  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae"
	I0203 11:15:50.217834  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:50.219653  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:50.300484  299665 logs.go:123] Gathering logs for kindnet [f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079] ...
	I0203 11:15:50.302309  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079"
	I0203 11:15:50.333805  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:50.365496  299665 logs.go:123] Gathering logs for container status ...
	I0203 11:15:50.365525  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:15:50.435808  299665 logs.go:123] Gathering logs for kubelet ...
	I0203 11:15:50.435881  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0203 11:15:50.518440  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:08 addons-595492 kubelet[1510]: W0203 11:14:08.879934    1510 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-595492" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-595492' and this object
	W0203 11:15:50.518716  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:08 addons-595492 kubelet[1510]: E0203 11:14:08.879992    1510 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:15:50.543968  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.818138    1510 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-595492" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-595492' and this object
	W0203 11:15:50.544241  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.818185    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:15:50.544468  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852011    1510 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-595492' and this object
	W0203 11:15:50.544728  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852058    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:15:50.544930  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852414    1510 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-595492' and this object
	W0203 11:15:50.545166  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852447    1510 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	I0203 11:15:50.574234  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:50.581228  299665 logs.go:123] Gathering logs for kube-apiserver [7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f] ...
	I0203 11:15:50.581262  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f"
	I0203 11:15:50.723077  299665 logs.go:123] Gathering logs for kube-proxy [04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e] ...
	I0203 11:15:50.723109  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e"
	I0203 11:15:50.728831  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:50.729904  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:50.805655  299665 out.go:358] Setting ErrFile to fd 2...
	I0203 11:15:50.805680  299665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0203 11:15:50.805744  299665 out.go:270] X Problems detected in kubelet:
	W0203 11:15:50.805760  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.818185    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:15:50.805768  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852011    1510 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-595492' and this object
	W0203 11:15:50.805793  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852058    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:15:50.805806  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852414    1510 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-595492' and this object
	W0203 11:15:50.805812  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852447    1510 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	I0203 11:15:50.805823  299665 out.go:358] Setting ErrFile to fd 2...
	I0203 11:15:50.805830  299665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:15:50.835106  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:51.070052  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:51.216804  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 11:15:51.218107  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:51.334474  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:51.568757  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:51.718018  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:51.818548  299665 kapi.go:107] duration metric: took 1m35.60633165s to wait for kubernetes.io/minikube-addons=registry ...
	I0203 11:15:51.835971  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:52.069285  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:52.215922  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:52.334832  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:52.569571  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:52.718390  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:52.835349  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:53.068471  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:53.216299  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:53.334537  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:53.568682  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:53.716464  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:53.834794  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:54.069500  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:54.215556  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:54.337064  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:54.569203  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:54.716189  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:54.834473  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:55.069663  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:55.218192  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:55.335623  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:55.568433  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:55.717199  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:55.838885  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:56.068937  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:56.216659  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:56.334419  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:56.570920  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:56.720114  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:56.834973  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:57.076452  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:57.217752  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:57.334328  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 11:15:57.568476  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:57.716617  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:57.834991  299665 kapi.go:107] duration metric: took 1m38.004301882s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0203 11:15:57.838062  299665 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-595492 cluster.
	I0203 11:15:57.841163  299665 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0203 11:15:57.843958  299665 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
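The addon hint above refers to a `gcp-auth-skip-secret` label key. As a hedged illustration only (the pod name `demo`, the `nginx` image, and the `"true"` value are placeholders, not taken from this test run), a pod manifest that opts out of credential mounting might look like:

```shell
# Hypothetical sketch: write a pod manifest carrying the gcp-auth-skip-secret
# label key mentioned in the addon output, then show the labeled line.
# Nothing here is applied to a cluster; it only writes and inspects a file.
cat <<'EOF' > /tmp/skip-gcp-auth-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: demo
    image: nginx
EOF
grep 'gcp-auth-skip-secret' /tmp/skip-gcp-auth-pod.yaml
```

Applying such a manifest (e.g. with `kubectl apply -f`) would be the cluster-side step; it is omitted here since this report's cluster is long gone.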
	I0203 11:15:58.068508  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:58.216663  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:58.570011  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:58.716148  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:59.069686  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:59.216272  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:15:59.567799  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:15:59.716393  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:00.069352  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:00.225608  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:00.571976  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:00.716693  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:00.806997  299665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:16:00.831881  299665 api_server.go:72] duration metric: took 1m51.809341461s to wait for apiserver process to appear ...
	I0203 11:16:00.831961  299665 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:16:00.832011  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:16:00.832108  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:16:00.896226  299665 cri.go:89] found id: "7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f"
	I0203 11:16:00.896293  299665 cri.go:89] found id: ""
	I0203 11:16:00.896315  299665 logs.go:282] 1 containers: [7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f]
	I0203 11:16:00.896412  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:00.900646  299665 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:16:00.900768  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:16:00.985212  299665 cri.go:89] found id: "3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae"
	I0203 11:16:00.985279  299665 cri.go:89] found id: ""
	I0203 11:16:00.985301  299665 logs.go:282] 1 containers: [3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae]
	I0203 11:16:00.985407  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:00.996968  299665 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:16:00.997115  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:16:01.072302  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:01.131134  299665 cri.go:89] found id: "1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d"
	I0203 11:16:01.131213  299665 cri.go:89] found id: ""
	I0203 11:16:01.131239  299665 logs.go:282] 1 containers: [1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d]
	I0203 11:16:01.131329  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:01.138623  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:16:01.138760  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:16:01.209461  299665 cri.go:89] found id: "862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb"
	I0203 11:16:01.209490  299665 cri.go:89] found id: ""
	I0203 11:16:01.209508  299665 logs.go:282] 1 containers: [862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb]
	I0203 11:16:01.209576  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:01.218821  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:01.222252  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:16:01.222357  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:16:01.287826  299665 cri.go:89] found id: "04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e"
	I0203 11:16:01.287854  299665 cri.go:89] found id: ""
	I0203 11:16:01.287862  299665 logs.go:282] 1 containers: [04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e]
	I0203 11:16:01.287934  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:01.292147  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:16:01.292232  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:16:01.351704  299665 cri.go:89] found id: "725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806"
	I0203 11:16:01.351739  299665 cri.go:89] found id: ""
	I0203 11:16:01.351748  299665 logs.go:282] 1 containers: [725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806]
	I0203 11:16:01.351818  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:01.355587  299665 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:16:01.355679  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:16:01.424425  299665 cri.go:89] found id: "f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079"
	I0203 11:16:01.424448  299665 cri.go:89] found id: ""
	I0203 11:16:01.424457  299665 logs.go:282] 1 containers: [f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079]
	I0203 11:16:01.424520  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:01.436284  299665 logs.go:123] Gathering logs for coredns [1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d] ...
	I0203 11:16:01.436313  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d"
	I0203 11:16:01.504336  299665 logs.go:123] Gathering logs for kubelet ...
	I0203 11:16:01.504367  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:16:01.580479  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0203 11:16:01.590122  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:08 addons-595492 kubelet[1510]: W0203 11:14:08.879934    1510 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-595492" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-595492' and this object
	W0203 11:16:01.590360  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:08 addons-595492 kubelet[1510]: E0203 11:14:08.879992    1510 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:16:01.616344  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.818138    1510 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-595492" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-595492' and this object
	W0203 11:16:01.616614  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.818185    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:16:01.616808  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852011    1510 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-595492' and this object
	W0203 11:16:01.617041  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852058    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:16:01.617214  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852414    1510 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-595492' and this object
	W0203 11:16:01.617430  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852447    1510 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	I0203 11:16:01.652372  299665 logs.go:123] Gathering logs for dmesg ...
	I0203 11:16:01.652409  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:16:01.668894  299665 logs.go:123] Gathering logs for kube-apiserver [7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f] ...
	I0203 11:16:01.668924  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f"
	I0203 11:16:01.720638  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:01.748201  299665 logs.go:123] Gathering logs for etcd [3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae] ...
	I0203 11:16:01.748283  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae"
	I0203 11:16:01.852652  299665 logs.go:123] Gathering logs for kindnet [f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079] ...
	I0203 11:16:01.852730  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079"
	I0203 11:16:01.920123  299665 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:16:01.920279  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:16:02.031751  299665 logs.go:123] Gathering logs for container status ...
	I0203 11:16:02.031862  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
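The "container status" command above uses a small shell fallback, `` sudo `which crictl || echo crictl` ps -a || sudo docker ps -a ``: if `crictl` is on PATH its full path is substituted, otherwise the bare name is used, and if that invocation still fails the whole pipeline falls back to `docker ps -a`. A minimal sketch of the substitution half of that pattern (nothing is executed with sudo here; the command is only echoed):

```shell
# Fallback pattern from the log line above: prefer crictl's resolved path,
# else keep the bare name so the later sudo invocation still has a command.
CRICTL="$(which crictl || echo crictl)"
echo "would run: sudo ${CRICTL} ps -a || sudo docker ps -a"
```

Either way the variable ends in `crictl`, so the downstream command string is always well-formed; only its success depends on what is actually installed.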
	I0203 11:16:02.068943  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:02.113531  299665 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:16:02.113611  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 11:16:02.225880  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:02.312379  299665 logs.go:123] Gathering logs for kube-scheduler [862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb] ...
	I0203 11:16:02.312413  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb"
	I0203 11:16:02.382305  299665 logs.go:123] Gathering logs for kube-proxy [04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e] ...
	I0203 11:16:02.382336  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e"
	I0203 11:16:02.453978  299665 logs.go:123] Gathering logs for kube-controller-manager [725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806] ...
	I0203 11:16:02.454053  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806"
	I0203 11:16:02.572246  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:02.597278  299665 out.go:358] Setting ErrFile to fd 2...
	I0203 11:16:02.597311  299665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0203 11:16:02.597380  299665 out.go:270] X Problems detected in kubelet:
	W0203 11:16:02.597390  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.818185    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:16:02.597398  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852011    1510 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-595492' and this object
	W0203 11:16:02.597404  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852058    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:16:02.597410  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852414    1510 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-595492' and this object
	W0203 11:16:02.597420  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852447    1510 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	I0203 11:16:02.597519  299665 out.go:358] Setting ErrFile to fd 2...
	I0203 11:16:02.597527  299665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:16:02.716659  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:03.069684  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:03.217960  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:03.569713  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:03.719875  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:04.070817  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:04.216964  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:04.568246  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:04.716161  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:05.068811  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:05.216184  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:05.576519  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:05.718853  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:06.070182  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:06.215583  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:06.568811  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:06.717130  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:07.068787  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:07.215246  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:07.568811  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:07.715975  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:08.073174  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:08.216053  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:08.568856  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:08.716090  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:09.068725  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:09.216670  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:09.569007  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:09.716729  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:10.070280  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:10.218725  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:10.568584  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:10.715968  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:11.069134  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:11.216165  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:11.571277  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:11.715305  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:12.069847  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:12.216422  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:12.570998  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:12.599265  299665 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0203 11:16:12.608674  299665 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0203 11:16:12.610384  299665 api_server.go:141] control plane version: v1.32.1
	I0203 11:16:12.610411  299665 api_server.go:131] duration metric: took 11.778428186s to wait for apiserver health ...
	I0203 11:16:12.610420  299665 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:16:12.610449  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:16:12.610511  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:16:12.672612  299665 cri.go:89] found id: "7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f"
	I0203 11:16:12.672634  299665 cri.go:89] found id: ""
	I0203 11:16:12.672642  299665 logs.go:282] 1 containers: [7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f]
	I0203 11:16:12.672712  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:12.677300  299665 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:16:12.677380  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:16:12.717749  299665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 11:16:12.745423  299665 cri.go:89] found id: "3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae"
	I0203 11:16:12.745463  299665 cri.go:89] found id: ""
	I0203 11:16:12.745473  299665 logs.go:282] 1 containers: [3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae]
	I0203 11:16:12.745549  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:12.750829  299665 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:16:12.750918  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:16:12.810992  299665 cri.go:89] found id: "1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d"
	I0203 11:16:12.811015  299665 cri.go:89] found id: ""
	I0203 11:16:12.811023  299665 logs.go:282] 1 containers: [1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d]
	I0203 11:16:12.811084  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:12.816470  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:16:12.816558  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:16:12.893127  299665 cri.go:89] found id: "862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb"
	I0203 11:16:12.893161  299665 cri.go:89] found id: ""
	I0203 11:16:12.893170  299665 logs.go:282] 1 containers: [862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb]
	I0203 11:16:12.893235  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:12.902010  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:16:12.902098  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:16:12.959574  299665 cri.go:89] found id: "04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e"
	I0203 11:16:12.959598  299665 cri.go:89] found id: ""
	I0203 11:16:12.959606  299665 logs.go:282] 1 containers: [04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e]
	I0203 11:16:12.959678  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:12.963376  299665 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:16:12.963451  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:16:13.004281  299665 cri.go:89] found id: "725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806"
	I0203 11:16:13.004305  299665 cri.go:89] found id: ""
	I0203 11:16:13.004318  299665 logs.go:282] 1 containers: [725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806]
	I0203 11:16:13.004388  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:13.011488  299665 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:16:13.011584  299665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:16:13.055359  299665 cri.go:89] found id: "f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079"
	I0203 11:16:13.055396  299665 cri.go:89] found id: ""
	I0203 11:16:13.055405  299665 logs.go:282] 1 containers: [f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079]
	I0203 11:16:13.055470  299665 ssh_runner.go:195] Run: which crictl
	I0203 11:16:13.059366  299665 logs.go:123] Gathering logs for kube-scheduler [862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb] ...
	I0203 11:16:13.059390  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb"
	I0203 11:16:13.069891  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:13.104542  299665 logs.go:123] Gathering logs for kube-proxy [04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e] ...
	I0203 11:16:13.104620  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e"
	I0203 11:16:13.150838  299665 logs.go:123] Gathering logs for kindnet [f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079] ...
	I0203 11:16:13.150877  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079"
	I0203 11:16:13.190133  299665 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:16:13.190210  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:16:13.217787  299665 kapi.go:107] duration metric: took 1m57.006416543s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0203 11:16:13.283902  299665 logs.go:123] Gathering logs for container status ...
	I0203 11:16:13.283940  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:16:13.339052  299665 logs.go:123] Gathering logs for kube-apiserver [7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f] ...
	I0203 11:16:13.339129  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f"
	I0203 11:16:13.421892  299665 logs.go:123] Gathering logs for coredns [1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d] ...
	I0203 11:16:13.421928  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d"
	I0203 11:16:13.468083  299665 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:16:13.468112  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 11:16:13.569174  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:13.647027  299665 logs.go:123] Gathering logs for etcd [3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae] ...
	I0203 11:16:13.647221  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae"
	I0203 11:16:13.750952  299665 logs.go:123] Gathering logs for kube-controller-manager [725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806] ...
	I0203 11:16:13.750985  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806"
	I0203 11:16:13.842118  299665 logs.go:123] Gathering logs for kubelet ...
	I0203 11:16:13.842155  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0203 11:16:13.906381  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:08 addons-595492 kubelet[1510]: W0203 11:14:08.879934    1510 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-595492" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-595492' and this object
	W0203 11:16:13.906650  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:08 addons-595492 kubelet[1510]: E0203 11:14:08.879992    1510 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:16:13.931687  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.818138    1510 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-595492" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-595492' and this object
	W0203 11:16:13.931994  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.818185    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:16:13.932212  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852011    1510 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-595492' and this object
	W0203 11:16:13.932464  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852058    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:16:13.932684  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852414    1510 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-595492' and this object
	W0203 11:16:13.932921  299665 logs.go:138] Found kubelet problem: Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852447    1510 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	I0203 11:16:13.971642  299665 logs.go:123] Gathering logs for dmesg ...
	I0203 11:16:13.971677  299665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:16:13.992342  299665 out.go:358] Setting ErrFile to fd 2...
	I0203 11:16:13.992375  299665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0203 11:16:13.992446  299665 out.go:270] X Problems detected in kubelet:
	W0203 11:16:13.992466  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.818185    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:16:13.992478  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852011    1510 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-595492' and this object
	W0203 11:16:13.992516  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852058    1510 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	W0203 11:16:13.992531  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: W0203 11:14:55.852414    1510 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-595492" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-595492' and this object
	W0203 11:16:13.992538  299665 out.go:270]   Feb 03 11:14:55 addons-595492 kubelet[1510]: E0203 11:14:55.852447    1510 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-595492\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-595492' and this object" logger="UnhandledError"
	I0203 11:16:13.992546  299665 out.go:358] Setting ErrFile to fd 2...
	I0203 11:16:13.992556  299665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:16:14.121525  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:14.568353  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:15.068720  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:15.568553  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:16.070942  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:16.568927  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:17.068476  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:17.568205  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:18.069791  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:18.568164  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:19.069122  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:19.568923  299665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 11:16:20.068733  299665 kapi.go:107] duration metric: took 2m3.505182762s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0203 11:16:20.072041  299665 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, storage-provisioner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0203 11:16:20.075843  299665 addons.go:514] duration metric: took 2m11.053111514s for enable addons: enabled=[cloud-spanner nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget storage-provisioner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0203 11:16:24.005363  299665 system_pods.go:59] 18 kube-system pods found
	I0203 11:16:24.005409  299665 system_pods.go:61] "coredns-668d6bf9bc-rkfp9" [423a84b4-af0e-4f22-98f9-f0787e1d7638] Running
	I0203 11:16:24.005416  299665 system_pods.go:61] "csi-hostpath-attacher-0" [7e33785d-df5f-4ef3-9f32-fd99e23f606c] Running
	I0203 11:16:24.005421  299665 system_pods.go:61] "csi-hostpath-resizer-0" [32699b4d-0216-4fd0-ae8c-d35fa776a6a1] Running
	I0203 11:16:24.005426  299665 system_pods.go:61] "csi-hostpathplugin-r87zg" [4a9697d4-c30f-4de7-bec2-9093860f6e66] Running
	I0203 11:16:24.005433  299665 system_pods.go:61] "etcd-addons-595492" [e74b229c-ea8d-4db9-b298-e86a72a737b4] Running
	I0203 11:16:24.005438  299665 system_pods.go:61] "kindnet-t6kg6" [c743238b-087c-4f79-9313-ce2aa57a4b40] Running
	I0203 11:16:24.005443  299665 system_pods.go:61] "kube-apiserver-addons-595492" [bada3f4b-269e-4d02-9ac5-d12ae9f1380c] Running
	I0203 11:16:24.005447  299665 system_pods.go:61] "kube-controller-manager-addons-595492" [1c771014-e230-4deb-8008-d364885e0785] Running
	I0203 11:16:24.005453  299665 system_pods.go:61] "kube-ingress-dns-minikube" [d4d864e5-2e42-4f35-a314-efba31268d56] Running
	I0203 11:16:24.005484  299665 system_pods.go:61] "kube-proxy-fc6ln" [78a74372-434c-4599-9636-ed199816ba98] Running
	I0203 11:16:24.005494  299665 system_pods.go:61] "kube-scheduler-addons-595492" [3a20dd25-09e8-4ecb-97a3-084c4e3fc266] Running
	I0203 11:16:24.005498  299665 system_pods.go:61] "metrics-server-7fbb699795-kdlwk" [6d1bb40a-f8f3-4406-b15a-d6c523995470] Running
	I0203 11:16:24.005502  299665 system_pods.go:61] "nvidia-device-plugin-daemonset-bbdcr" [c4cf8c1e-eadc-4443-b557-2b1d3b1eaee1] Running
	I0203 11:16:24.005509  299665 system_pods.go:61] "registry-6c88467877-pj84h" [572705f6-2f5d-43aa-b391-4385619b7743] Running
	I0203 11:16:24.005520  299665 system_pods.go:61] "registry-proxy-htn7v" [2679ee41-9f70-4f5c-a26d-6b342c3151e6] Running
	I0203 11:16:24.005524  299665 system_pods.go:61] "snapshot-controller-68b874b76f-n2fjk" [0ee7287c-7e6a-49df-906c-51427f99db5b] Running
	I0203 11:16:24.005528  299665 system_pods.go:61] "snapshot-controller-68b874b76f-wrdng" [706aef0c-9522-4152-80aa-1c463583a054] Running
	I0203 11:16:24.005532  299665 system_pods.go:61] "storage-provisioner" [bb6b239e-d4b1-4f08-ba6a-2ad508dd8e9a] Running
	I0203 11:16:24.005538  299665 system_pods.go:74] duration metric: took 11.395104533s to wait for pod list to return data ...
	I0203 11:16:24.005565  299665 default_sa.go:34] waiting for default service account to be created ...
	I0203 11:16:24.009399  299665 default_sa.go:45] found service account: "default"
	I0203 11:16:24.009423  299665 default_sa.go:55] duration metric: took 3.832528ms for default service account to be created ...
	I0203 11:16:24.009434  299665 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 11:16:24.023023  299665 system_pods.go:86] 18 kube-system pods found
	I0203 11:16:24.023067  299665 system_pods.go:89] "coredns-668d6bf9bc-rkfp9" [423a84b4-af0e-4f22-98f9-f0787e1d7638] Running
	I0203 11:16:24.023076  299665 system_pods.go:89] "csi-hostpath-attacher-0" [7e33785d-df5f-4ef3-9f32-fd99e23f606c] Running
	I0203 11:16:24.023083  299665 system_pods.go:89] "csi-hostpath-resizer-0" [32699b4d-0216-4fd0-ae8c-d35fa776a6a1] Running
	I0203 11:16:24.023088  299665 system_pods.go:89] "csi-hostpathplugin-r87zg" [4a9697d4-c30f-4de7-bec2-9093860f6e66] Running
	I0203 11:16:24.023093  299665 system_pods.go:89] "etcd-addons-595492" [e74b229c-ea8d-4db9-b298-e86a72a737b4] Running
	I0203 11:16:24.023098  299665 system_pods.go:89] "kindnet-t6kg6" [c743238b-087c-4f79-9313-ce2aa57a4b40] Running
	I0203 11:16:24.023103  299665 system_pods.go:89] "kube-apiserver-addons-595492" [bada3f4b-269e-4d02-9ac5-d12ae9f1380c] Running
	I0203 11:16:24.023108  299665 system_pods.go:89] "kube-controller-manager-addons-595492" [1c771014-e230-4deb-8008-d364885e0785] Running
	I0203 11:16:24.023114  299665 system_pods.go:89] "kube-ingress-dns-minikube" [d4d864e5-2e42-4f35-a314-efba31268d56] Running
	I0203 11:16:24.023125  299665 system_pods.go:89] "kube-proxy-fc6ln" [78a74372-434c-4599-9636-ed199816ba98] Running
	I0203 11:16:24.023130  299665 system_pods.go:89] "kube-scheduler-addons-595492" [3a20dd25-09e8-4ecb-97a3-084c4e3fc266] Running
	I0203 11:16:24.023135  299665 system_pods.go:89] "metrics-server-7fbb699795-kdlwk" [6d1bb40a-f8f3-4406-b15a-d6c523995470] Running
	I0203 11:16:24.023143  299665 system_pods.go:89] "nvidia-device-plugin-daemonset-bbdcr" [c4cf8c1e-eadc-4443-b557-2b1d3b1eaee1] Running
	I0203 11:16:24.023150  299665 system_pods.go:89] "registry-6c88467877-pj84h" [572705f6-2f5d-43aa-b391-4385619b7743] Running
	I0203 11:16:24.023158  299665 system_pods.go:89] "registry-proxy-htn7v" [2679ee41-9f70-4f5c-a26d-6b342c3151e6] Running
	I0203 11:16:24.023163  299665 system_pods.go:89] "snapshot-controller-68b874b76f-n2fjk" [0ee7287c-7e6a-49df-906c-51427f99db5b] Running
	I0203 11:16:24.023167  299665 system_pods.go:89] "snapshot-controller-68b874b76f-wrdng" [706aef0c-9522-4152-80aa-1c463583a054] Running
	I0203 11:16:24.023173  299665 system_pods.go:89] "storage-provisioner" [bb6b239e-d4b1-4f08-ba6a-2ad508dd8e9a] Running
	I0203 11:16:24.023184  299665 system_pods.go:126] duration metric: took 13.74351ms to wait for k8s-apps to be running ...
	I0203 11:16:24.023192  299665 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 11:16:24.023255  299665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:16:24.036863  299665 system_svc.go:56] duration metric: took 13.660113ms WaitForService to wait for kubelet
	I0203 11:16:24.036943  299665 kubeadm.go:582] duration metric: took 2m15.014408249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:16:24.036974  299665 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:16:24.041594  299665 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0203 11:16:24.041701  299665 node_conditions.go:123] node cpu capacity is 2
	I0203 11:16:24.041722  299665 node_conditions.go:105] duration metric: took 4.740718ms to run NodePressure ...
	I0203 11:16:24.041735  299665 start.go:241] waiting for startup goroutines ...
	I0203 11:16:24.041744  299665 start.go:246] waiting for cluster config update ...
	I0203 11:16:24.041764  299665 start.go:255] writing updated cluster config ...
	I0203 11:16:24.042152  299665 ssh_runner.go:195] Run: rm -f paused
	I0203 11:16:24.454030  299665 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0203 11:16:24.457268  299665 out.go:177] * Done! kubectl is now configured to use "addons-595492" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 03 11:19:09 addons-595492 crio[975]: time="2025-02-03 11:19:09.601478673Z" level=info msg="Removed container 0fa6ca38863b4b027ab3dff4929c5c15923d643f5d4188a380dab3a334ce66c0: default/cloud-spanner-emulator-5d76cffbc-4vw9k/cloud-spanner-emulator" id=afaec4a8-6f16-46ea-ac47-4fb2f5234ce2 name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 03 11:19:41 addons-595492 crio[975]: time="2025-02-03 11:19:41.950905457Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-k8ctb/POD" id=43534809-52d3-47bf-a07f-5849cf64c7b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 03 11:19:41 addons-595492 crio[975]: time="2025-02-03 11:19:41.950968850Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 03 11:19:41 addons-595492 crio[975]: time="2025-02-03 11:19:41.980812862Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-k8ctb Namespace:default ID:43441f89c7225912fdc6fa4bb7303dad2f8fe3416d76e6295be029a0777d8ac5 UID:ae878374-b537-43d8-8e9a-810a6ecdd7d0 NetNS:/var/run/netns/e5c70ea1-1ad3-400b-aa37-930b52070594 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 03 11:19:41 addons-595492 crio[975]: time="2025-02-03 11:19:41.980874319Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-k8ctb to CNI network \"kindnet\" (type=ptp)"
	Feb 03 11:19:41 addons-595492 crio[975]: time="2025-02-03 11:19:41.989681028Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-k8ctb Namespace:default ID:43441f89c7225912fdc6fa4bb7303dad2f8fe3416d76e6295be029a0777d8ac5 UID:ae878374-b537-43d8-8e9a-810a6ecdd7d0 NetNS:/var/run/netns/e5c70ea1-1ad3-400b-aa37-930b52070594 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 03 11:19:41 addons-595492 crio[975]: time="2025-02-03 11:19:41.989824240Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-k8ctb for CNI network kindnet (type=ptp)"
	Feb 03 11:19:41 addons-595492 crio[975]: time="2025-02-03 11:19:41.995033094Z" level=info msg="Ran pod sandbox 43441f89c7225912fdc6fa4bb7303dad2f8fe3416d76e6295be029a0777d8ac5 with infra container: default/hello-world-app-7d9564db4-k8ctb/POD" id=43534809-52d3-47bf-a07f-5849cf64c7b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 03 11:19:41 addons-595492 crio[975]: time="2025-02-03 11:19:41.995984361Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=56ef9493-3bdf-4ab7-8e19-e2737605eec9 name=/runtime.v1.ImageService/ImageStatus
	Feb 03 11:19:41 addons-595492 crio[975]: time="2025-02-03 11:19:41.996195134Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=56ef9493-3bdf-4ab7-8e19-e2737605eec9 name=/runtime.v1.ImageService/ImageStatus
	Feb 03 11:19:41 addons-595492 crio[975]: time="2025-02-03 11:19:41.996964789Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=73c9140a-556e-4912-95d3-5fc1b92325b1 name=/runtime.v1.ImageService/PullImage
	Feb 03 11:19:42 addons-595492 crio[975]: time="2025-02-03 11:19:41.999514262Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 03 11:19:42 addons-595492 crio[975]: time="2025-02-03 11:19:42.272853665Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.021814094Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=73c9140a-556e-4912-95d3-5fc1b92325b1 name=/runtime.v1.ImageService/PullImage
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.022815174Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=6a7efb72-1823-43ff-8707-0248172eea95 name=/runtime.v1.ImageService/ImageStatus
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.023563348Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6a7efb72-1823-43ff-8707-0248172eea95 name=/runtime.v1.ImageService/ImageStatus
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.024639398Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=baabc712-f898-46dd-80de-d129a2c6c30d name=/runtime.v1.ImageService/ImageStatus
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.025370169Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=baabc712-f898-46dd-80de-d129a2c6c30d name=/runtime.v1.ImageService/ImageStatus
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.026270671Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-k8ctb/hello-world-app" id=808c1484-1222-4494-8fe5-11b6e19880a3 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.026375647Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.052549230Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/446dee58cef1a30f57c106538914e98c45d27f97239b84ae3c9d5a7cdbb0c416/merged/etc/passwd: no such file or directory"
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.052765410Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/446dee58cef1a30f57c106538914e98c45d27f97239b84ae3c9d5a7cdbb0c416/merged/etc/group: no such file or directory"
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.111429340Z" level=info msg="Created container 1ca45f396f91b1e0729f82dc2549a94c728bde0c73c69ed08860ddd1bf5afee9: default/hello-world-app-7d9564db4-k8ctb/hello-world-app" id=808c1484-1222-4494-8fe5-11b6e19880a3 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.112208365Z" level=info msg="Starting container: 1ca45f396f91b1e0729f82dc2549a94c728bde0c73c69ed08860ddd1bf5afee9" id=c3a65453-7bd0-4668-a9d8-da9de42ac003 name=/runtime.v1.RuntimeService/StartContainer
	Feb 03 11:19:43 addons-595492 crio[975]: time="2025-02-03 11:19:43.118204442Z" level=info msg="Started container" PID=8613 containerID=1ca45f396f91b1e0729f82dc2549a94c728bde0c73c69ed08860ddd1bf5afee9 description=default/hello-world-app-7d9564db4-k8ctb/hello-world-app id=c3a65453-7bd0-4668-a9d8-da9de42ac003 name=/runtime.v1.RuntimeService/StartContainer sandboxID=43441f89c7225912fdc6fa4bb7303dad2f8fe3416d76e6295be029a0777d8ac5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	1ca45f396f91b       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   43441f89c7225       hello-world-app-7d9564db4-k8ctb
	d1b10d662e485       docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10                              2 minutes ago            Running             nginx                     0                   937b731de4873       nginx
	d6b9d56961ab0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   b8a9bd81b5207       busybox
	c6f08d67cac3d       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             3 minutes ago            Running             controller                0                   2a4e8959bc4e9       ingress-nginx-controller-56d7c84fd4-htcjq
	2ddd04002f4f4       d54655ed3a8543a162b688a24bf969ee1a28d906b8ccb30188059247efdae234                                                             3 minutes ago            Exited              patch                     1                   eca251019c18e       ingress-nginx-admission-patch-hxktf
	d9b8e4de99987       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   3 minutes ago            Exited              create                    0                   f3e6bf2f65055       ingress-nginx-admission-create-r47h4
	34e0f7edd7c0a       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             4 minutes ago            Running             minikube-ingress-dns      0                   cf80f35a0074e       kube-ingress-dns-minikube
	1ed441b5661df       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             4 minutes ago            Running             coredns                   0                   96578e4b3d3f5       coredns-668d6bf9bc-rkfp9
	e55f70a7dafdd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago            Running             storage-provisioner       0                   1e39742cb49bc       storage-provisioner
	f4dc37affab7c       docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be                           5 minutes ago            Running             kindnet-cni               0                   29e27ab8ac78f       kindnet-t6kg6
	04d085fe2f83c       e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0                                                             5 minutes ago            Running             kube-proxy                0                   2731ece908299       kube-proxy-fc6ln
	7dec44de52803       265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19                                                             5 minutes ago            Running             kube-apiserver            0                   f6c9f337ccfaa       kube-apiserver-addons-595492
	3ecf1d86950b2       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                             5 minutes ago            Running             etcd                      0                   9478e56a99135       etcd-addons-595492
	725dbadb9b09e       2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13                                                             5 minutes ago            Running             kube-controller-manager   0                   86406c242f3d4       kube-controller-manager-addons-595492
	862000ea95047       ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c                                                             5 minutes ago            Running             kube-scheduler            0                   a474aad2e592b       kube-scheduler-addons-595492
	
	
	==> coredns [1ed441b5661df8255ab5f1d4b5e90e4d35f14f353f9794dab2cfe26a8d720a1d] <==
	[INFO] 10.244.0.12:48102 - 24912 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001782501s
	[INFO] 10.244.0.12:48102 - 13581 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000163413s
	[INFO] 10.244.0.12:48102 - 62495 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000104615s
	[INFO] 10.244.0.12:45941 - 37862 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000175023s
	[INFO] 10.244.0.12:45941 - 37542 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100947s
	[INFO] 10.244.0.12:50040 - 34580 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083003s
	[INFO] 10.244.0.12:50040 - 34399 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000089674s
	[INFO] 10.244.0.12:58947 - 25616 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079548s
	[INFO] 10.244.0.12:58947 - 25460 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078367s
	[INFO] 10.244.0.12:60777 - 62798 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001187294s
	[INFO] 10.244.0.12:60777 - 62617 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00124249s
	[INFO] 10.244.0.12:51102 - 34151 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000102884s
	[INFO] 10.244.0.12:51102 - 34020 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000074593s
	[INFO] 10.244.0.19:50783 - 65509 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000180217s
	[INFO] 10.244.0.19:54103 - 42123 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000162002s
	[INFO] 10.244.0.19:33192 - 55623 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0003061s
	[INFO] 10.244.0.19:53162 - 64987 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000398735s
	[INFO] 10.244.0.19:58668 - 63177 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000161804s
	[INFO] 10.244.0.19:33354 - 56708 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132561s
	[INFO] 10.244.0.19:50633 - 41680 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002362027s
	[INFO] 10.244.0.19:47385 - 55850 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002673533s
	[INFO] 10.244.0.19:47910 - 37513 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001913872s
	[INFO] 10.244.0.19:36279 - 62809 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00199813s
	[INFO] 10.244.0.24:49433 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000351572s
	[INFO] 10.244.0.24:56302 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000303195s
	
	
	==> describe nodes <==
	Name:               addons-595492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-595492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=addons-595492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_03T11_14_05_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-595492
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 11:14:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-595492
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 11:19:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 11:18:09 +0000   Mon, 03 Feb 2025 11:13:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 11:18:09 +0000   Mon, 03 Feb 2025 11:13:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 11:18:09 +0000   Mon, 03 Feb 2025 11:13:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 11:18:09 +0000   Mon, 03 Feb 2025 11:14:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-595492
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 17a67e63f3ee46e7bade946dc17700c3
	  System UUID:                1f2e9b1a-855a-4280-8d74-416ac7d366c8
	  Boot ID:                    5d040379-6a1c-4428-8653-680b35698cc0
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  default                     hello-world-app-7d9564db4-k8ctb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-htcjq    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m27s
	  kube-system                 coredns-668d6bf9bc-rkfp9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m35s
	  kube-system                 etcd-addons-595492                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m39s
	  kube-system                 kindnet-t6kg6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m35s
	  kube-system                 kube-apiserver-addons-595492                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-addons-595492        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-proxy-fc6ln                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-scheduler-addons-595492                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m27s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m47s (x8 over 5m47s)  kubelet          Node addons-595492 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m47s (x8 over 5m47s)  kubelet          Node addons-595492 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m47s (x8 over 5m47s)  kubelet          Node addons-595492 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m39s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m39s                  kubelet          Node addons-595492 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m39s                  kubelet          Node addons-595492 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m39s                  kubelet          Node addons-595492 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m36s                  node-controller  Node addons-595492 event: Registered Node addons-595492 in Controller
	  Normal   NodeReady                4m48s                  kubelet          Node addons-595492 status is now: NodeReady
	
	
	==> dmesg <==
	[Feb 3 09:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016791] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.502359] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033954] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.782772] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.606538] kauditd_printk_skb: 36 callbacks suppressed
	[Feb 3 10:43] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [3ecf1d86950b2230442a129e0937cd79a9254e15600b5203fd4313ea2ba4b0ae] <==
	{"level":"warn","ts":"2025-02-03T11:14:12.819137Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T11:14:12.005773Z","time spent":"813.292253ms","remote":"127.0.0.1:50540","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4094,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:373 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4045 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-02-03T11:14:12.821031Z","caller":"traceutil/trace.go:171","msg":"trace[855046352] linearizableReadLoop","detail":"{readStateIndex:410; appliedIndex:406; }","duration":"600.82451ms","start":"2025-02-03T11:14:12.220194Z","end":"2025-02-03T11:14:12.821018Z","steps":["trace[855046352] 'read index received'  (duration: 296.698345ms)","trace[855046352] 'applied index is now lower than readState.Index'  (duration: 304.124894ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-03T11:14:12.839022Z","caller":"traceutil/trace.go:171","msg":"trace[1892376563] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"747.584123ms","start":"2025-02-03T11:14:12.091414Z","end":"2025-02-03T11:14:12.838998Z","steps":["trace[1892376563] 'process raft request'  (duration: 702.504475ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T11:14:12.839634Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T11:14:12.091392Z","time spent":"748.193515ms","remote":"127.0.0.1:50290","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":211,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" mod_revision:320 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" value_size:141 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" > >"}
	{"level":"warn","ts":"2025-02-03T11:14:12.839238Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"861.906287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-02-03T11:14:12.839888Z","caller":"traceutil/trace.go:171","msg":"trace[1594913336] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:399; }","duration":"862.567445ms","start":"2025-02-03T11:14:11.977310Z","end":"2025-02-03T11:14:12.839877Z","steps":["trace[1594913336] 'agreement among raft nodes before linearized reading'  (duration: 861.865614ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T11:14:12.839945Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T11:14:11.977254Z","time spent":"862.679871ms","remote":"127.0.0.1:50212","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":375,"request content":"key:\"/registry/namespaces/kube-system\" limit:1 "}
	{"level":"warn","ts":"2025-02-03T11:14:12.887841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"966.892782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T11:14:12.887990Z","caller":"traceutil/trace.go:171","msg":"trace[51420541] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:399; }","duration":"967.058779ms","start":"2025-02-03T11:14:11.920916Z","end":"2025-02-03T11:14:12.887974Z","steps":["trace[51420541] 'agreement among raft nodes before linearized reading'  (duration: 918.562833ms)","trace[51420541] 'range keys from in-memory index tree'  (duration: 48.316738ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T11:14:12.888057Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T11:14:11.920844Z","time spent":"967.191685ms","remote":"127.0.0.1:50556","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":29,"request content":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 "}
	{"level":"warn","ts":"2025-02-03T11:14:12.888290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.470126ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T11:14:12.888477Z","caller":"traceutil/trace.go:171","msg":"trace[1499316065] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:399; }","duration":"372.559741ms","start":"2025-02-03T11:14:12.515795Z","end":"2025-02-03T11:14:12.888355Z","steps":["trace[1499316065] 'agreement among raft nodes before linearized reading'  (duration: 324.021432ms)","trace[1499316065] 'range keys from in-memory index tree'  (duration: 48.428771ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T11:14:12.888632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.00859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T11:14:12.904219Z","caller":"traceutil/trace.go:171","msg":"trace[413357223] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io; range_end:; response_count:0; response_revision:399; }","duration":"187.585573ms","start":"2025-02-03T11:14:12.716612Z","end":"2025-02-03T11:14:12.904197Z","steps":["trace[413357223] 'agreement among raft nodes before linearized reading'  (duration: 137.073651ms)","trace[413357223] 'range keys from in-memory index tree'  (duration: 34.915337ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T11:14:12.888695Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.19446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T11:14:12.904513Z","caller":"traceutil/trace.go:171","msg":"trace[555020610] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:399; }","duration":"188.005099ms","start":"2025-02-03T11:14:12.716495Z","end":"2025-02-03T11:14:12.904500Z","steps":["trace[555020610] 'agreement among raft nodes before linearized reading'  (duration: 137.201535ms)","trace[555020610] 'range keys from in-memory index tree'  (duration: 34.984342ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T11:14:12.888713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.738818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T11:14:12.904731Z","caller":"traceutil/trace.go:171","msg":"trace[8130302] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:399; }","duration":"330.75273ms","start":"2025-02-03T11:14:12.573968Z","end":"2025-02-03T11:14:12.904721Z","steps":["trace[8130302] 'agreement among raft nodes before linearized reading'  (duration: 279.735522ms)","trace[8130302] 'range keys from in-memory index tree'  (duration: 35.000621ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T11:14:12.904805Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T11:14:12.518015Z","time spent":"386.776418ms","remote":"127.0.0.1:50476","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":29,"request content":"key:\"/registry/storageclasses/standard\" limit:1 "}
	{"level":"warn","ts":"2025-02-03T11:14:12.888761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"371.230983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-addons-595492\" limit:1 ","response":"range_response_count:1 size:5750"}
	{"level":"info","ts":"2025-02-03T11:14:12.905043Z","caller":"traceutil/trace.go:171","msg":"trace[602115623] range","detail":"{range_begin:/registry/pods/kube-system/etcd-addons-595492; range_end:; response_count:1; response_revision:399; }","duration":"387.508362ms","start":"2025-02-03T11:14:12.517525Z","end":"2025-02-03T11:14:12.905034Z","steps":["trace[602115623] 'agreement among raft nodes before linearized reading'  (duration: 336.183594ms)","trace[602115623] 'range keys from in-memory index tree'  (duration: 35.014742ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T11:14:12.905133Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T11:14:12.517468Z","time spent":"387.653099ms","remote":"127.0.0.1:50278","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":5774,"request content":"key:\"/registry/pods/kube-system/etcd-addons-595492\" limit:1 "}
	{"level":"warn","ts":"2025-02-03T11:14:12.891058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.40488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-595492\" limit:1 ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2025-02-03T11:14:12.911493Z","caller":"traceutil/trace.go:171","msg":"trace[1452176707] range","detail":"{range_begin:/registry/minions/addons-595492; range_end:; response_count:1; response_revision:399; }","duration":"194.818308ms","start":"2025-02-03T11:14:12.716635Z","end":"2025-02-03T11:14:12.911454Z","steps":["trace[1452176707] 'agreement among raft nodes before linearized reading'  (duration: 136.862197ms)","trace[1452176707] 'range keys from in-memory index tree'  (duration: 36.642559ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T11:14:12.912149Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T11:14:12.515725Z","time spent":"388.337019ms","remote":"127.0.0.1:50290","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" limit:1 "}
	
	
	==> kernel <==
	 11:19:43 up  2:01,  0 users,  load average: 1.51, 1.95, 2.50
	Linux addons-595492 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f4dc37affab7cf7c4117992eed96e4c687b5acc667d8f186229409951f314079] <==
	I0203 11:17:35.345963       1 main.go:301] handling current node
	I0203 11:17:45.345840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:17:45.345992       1 main.go:301] handling current node
	I0203 11:17:55.346078       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:17:55.346199       1 main.go:301] handling current node
	I0203 11:18:05.345771       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:18:05.345889       1 main.go:301] handling current node
	I0203 11:18:15.346220       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:18:15.346254       1 main.go:301] handling current node
	I0203 11:18:25.353479       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:18:25.353512       1 main.go:301] handling current node
	I0203 11:18:35.347717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:18:35.347844       1 main.go:301] handling current node
	I0203 11:18:45.345662       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:18:45.345697       1 main.go:301] handling current node
	I0203 11:18:55.346682       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:18:55.346717       1 main.go:301] handling current node
	I0203 11:19:05.345694       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:19:05.345813       1 main.go:301] handling current node
	I0203 11:19:15.346199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:19:15.346232       1 main.go:301] handling current node
	I0203 11:19:25.349864       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:19:25.349899       1 main.go:301] handling current node
	I0203 11:19:35.346138       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0203 11:19:35.346225       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7dec44de52803723ec7f129677524487ac8274a5171b2eba238f7be35a37021f] <==
	I0203 11:16:45.533546       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.104.219"}
	I0203 11:17:16.572998       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0203 11:17:17.606404       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0203 11:17:21.093451       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0203 11:17:22.219440       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0203 11:17:22.645074       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.194.101"}
	I0203 11:17:43.552193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0203 11:17:43.552418       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0203 11:17:43.590839       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0203 11:17:43.590958       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0203 11:17:43.684312       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0203 11:17:43.684603       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0203 11:17:43.687322       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0203 11:17:43.688277       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0203 11:17:43.722910       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0203 11:17:43.722956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0203 11:17:44.689567       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0203 11:17:44.727620       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0203 11:17:44.745710       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0203 11:17:49.688896       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0203 11:18:01.962818       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0203 11:18:01.973382       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0203 11:18:01.984037       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0203 11:18:16.984844       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0203 11:19:41.866624       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.18.197"}
	
	
	==> kube-controller-manager [725dbadb9b09ecd52e1685df7aa0f94b1bfef013dffcd1fc59a7223e76143806] <==
	W0203 11:18:57.753067       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 11:18:57.753106       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0203 11:19:01.015186       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0203 11:19:04.495103       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0203 11:19:04.496190       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0203 11:19:04.497237       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 11:19:04.497274       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0203 11:19:09.129944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5d76cffbc" duration="4.529µs"
	W0203 11:19:25.245670       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0203 11:19:25.246745       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0203 11:19:25.247728       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 11:19:25.247763       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0203 11:19:35.947186       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0203 11:19:35.948242       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0203 11:19:35.949426       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 11:19:35.949519       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0203 11:19:40.435128       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0203 11:19:40.436082       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0203 11:19:40.437080       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 11:19:40.437118       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0203 11:19:41.652095       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="55.023949ms"
	I0203 11:19:41.672975       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="20.83092ms"
	I0203 11:19:41.673462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="49.854µs"
	I0203 11:19:43.672819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="12.86205ms"
	I0203 11:19:43.673003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="74.79µs"
	
	
	==> kube-proxy [04d085fe2f83c297821bd05520a68ee25db6ae7d525fdd88e028cc6cf32c546e] <==
	I0203 11:14:15.449234       1 server_linux.go:66] "Using iptables proxy"
	I0203 11:14:15.763172       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0203 11:14:15.763241       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 11:14:15.813072       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0203 11:14:15.813208       1 server_linux.go:170] "Using iptables Proxier"
	I0203 11:14:15.834682       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 11:14:15.842128       1 server.go:497] "Version info" version="v1.32.1"
	I0203 11:14:15.842240       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 11:14:15.849241       1 config.go:105] "Starting endpoint slice config controller"
	I0203 11:14:15.851423       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 11:14:15.850991       1 config.go:329] "Starting node config controller"
	I0203 11:14:15.851570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 11:14:15.849171       1 config.go:199] "Starting service config controller"
	I0203 11:14:15.851652       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 11:14:15.952171       1 shared_informer.go:320] Caches are synced for service config
	I0203 11:14:15.952275       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 11:14:15.953037       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [862000ea95047947edeac661b80335b6fd09cff3d5dd5e0da44bc7dbf72985bb] <==
	I0203 11:14:01.912356       1 serving.go:386] Generated self-signed cert in-memory
	W0203 11:14:02.656102       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0203 11:14:02.656222       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0203 11:14:02.656256       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0203 11:14:02.656306       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 11:14:02.679049       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 11:14:02.679672       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 11:14:02.682030       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 11:14:02.682280       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 11:14:02.682327       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 11:14:02.682448       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0203 11:14:02.692118       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0203 11:14:02.700325       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0203 11:14:02.694442       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0203 11:14:02.701308       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 11:14:04.184018       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 03 11:19:04 addons-595492 kubelet[1510]: I0203 11:19:04.607283    1510 scope.go:117] "RemoveContainer" containerID="e5cbb2298727d04c2cc10ac0c07f73ea0e645a8cf4e6ef0dee88d66edae7691b"
	Feb 03 11:19:05 addons-595492 kubelet[1510]: E0203 11:19:05.026327    1510 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9fc9fe7c6e54a636c1dab883389c30febdb70faab632ee499bc5928bc6ff8b4b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9fc9fe7c6e54a636c1dab883389c30febdb70faab632ee499bc5928bc6ff8b4b/diff: no such file or directory, extraDiskErr: <nil>
	Feb 03 11:19:08 addons-595492 kubelet[1510]: I0203 11:19:08.269924    1510 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5d76cffbc-4vw9k" secret="" err="secret \"gcp-auth\" not found"
	Feb 03 11:19:09 addons-595492 kubelet[1510]: I0203 11:19:09.362740    1510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j64qh\" (UniqueName: \"kubernetes.io/projected/8edbf1bd-fa6d-48fb-a0a6-59a3234bb516-kube-api-access-j64qh\") pod \"8edbf1bd-fa6d-48fb-a0a6-59a3234bb516\" (UID: \"8edbf1bd-fa6d-48fb-a0a6-59a3234bb516\") "
	Feb 03 11:19:09 addons-595492 kubelet[1510]: I0203 11:19:09.364664    1510 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8edbf1bd-fa6d-48fb-a0a6-59a3234bb516-kube-api-access-j64qh" (OuterVolumeSpecName: "kube-api-access-j64qh") pod "8edbf1bd-fa6d-48fb-a0a6-59a3234bb516" (UID: "8edbf1bd-fa6d-48fb-a0a6-59a3234bb516"). InnerVolumeSpecName "kube-api-access-j64qh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Feb 03 11:19:09 addons-595492 kubelet[1510]: I0203 11:19:09.464014    1510 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j64qh\" (UniqueName: \"kubernetes.io/projected/8edbf1bd-fa6d-48fb-a0a6-59a3234bb516-kube-api-access-j64qh\") on node \"addons-595492\" DevicePath \"\""
	Feb 03 11:19:09 addons-595492 kubelet[1510]: I0203 11:19:09.576746    1510 scope.go:117] "RemoveContainer" containerID="0fa6ca38863b4b027ab3dff4929c5c15923d643f5d4188a380dab3a334ce66c0"
	Feb 03 11:19:09 addons-595492 kubelet[1510]: I0203 11:19:09.601731    1510 scope.go:117] "RemoveContainer" containerID="0fa6ca38863b4b027ab3dff4929c5c15923d643f5d4188a380dab3a334ce66c0"
	Feb 03 11:19:09 addons-595492 kubelet[1510]: E0203 11:19:09.602140    1510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fa6ca38863b4b027ab3dff4929c5c15923d643f5d4188a380dab3a334ce66c0\": container with ID starting with 0fa6ca38863b4b027ab3dff4929c5c15923d643f5d4188a380dab3a334ce66c0 not found: ID does not exist" containerID="0fa6ca38863b4b027ab3dff4929c5c15923d643f5d4188a380dab3a334ce66c0"
	Feb 03 11:19:09 addons-595492 kubelet[1510]: I0203 11:19:09.602191    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa6ca38863b4b027ab3dff4929c5c15923d643f5d4188a380dab3a334ce66c0"} err="failed to get container status \"0fa6ca38863b4b027ab3dff4929c5c15923d643f5d4188a380dab3a334ce66c0\": rpc error: code = NotFound desc = could not find container \"0fa6ca38863b4b027ab3dff4929c5c15923d643f5d4188a380dab3a334ce66c0\": container with ID starting with 0fa6ca38863b4b027ab3dff4929c5c15923d643f5d4188a380dab3a334ce66c0 not found: ID does not exist"
	Feb 03 11:19:10 addons-595492 kubelet[1510]: I0203 11:19:10.271067    1510 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8edbf1bd-fa6d-48fb-a0a6-59a3234bb516" path="/var/lib/kubelet/pods/8edbf1bd-fa6d-48fb-a0a6-59a3234bb516/volumes"
	Feb 03 11:19:14 addons-595492 kubelet[1510]: E0203 11:19:14.495042    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738581554494807616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 11:19:14 addons-595492 kubelet[1510]: E0203 11:19:14.495512    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738581554494807616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 11:19:20 addons-595492 kubelet[1510]: I0203 11:19:20.270278    1510 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 03 11:19:24 addons-595492 kubelet[1510]: E0203 11:19:24.498759    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738581564498370812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 11:19:24 addons-595492 kubelet[1510]: E0203 11:19:24.498802    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738581564498370812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 11:19:34 addons-595492 kubelet[1510]: E0203 11:19:34.501239    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738581574501071144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 11:19:34 addons-595492 kubelet[1510]: E0203 11:19:34.501277    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738581574501071144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 11:19:41 addons-595492 kubelet[1510]: I0203 11:19:41.649017    1510 memory_manager.go:355] "RemoveStaleState removing state" podUID="c4cf8c1e-eadc-4443-b557-2b1d3b1eaee1" containerName="nvidia-device-plugin-ctr"
	Feb 03 11:19:41 addons-595492 kubelet[1510]: I0203 11:19:41.649060    1510 memory_manager.go:355] "RemoveStaleState removing state" podUID="62e5198f-409d-4032-b81f-22d3bb835a94" containerName="helper-pod"
	Feb 03 11:19:41 addons-595492 kubelet[1510]: I0203 11:19:41.649069    1510 memory_manager.go:355] "RemoveStaleState removing state" podUID="4220c602-7d86-4167-82e3-df155563c178" containerName="yakd"
	Feb 03 11:19:41 addons-595492 kubelet[1510]: I0203 11:19:41.649077    1510 memory_manager.go:355] "RemoveStaleState removing state" podUID="8edbf1bd-fa6d-48fb-a0a6-59a3234bb516" containerName="cloud-spanner-emulator"
	Feb 03 11:19:41 addons-595492 kubelet[1510]: I0203 11:19:41.649084    1510 memory_manager.go:355] "RemoveStaleState removing state" podUID="af12a66e-5a4f-4a82-baf6-f0842b71a166" containerName="local-path-provisioner"
	Feb 03 11:19:41 addons-595492 kubelet[1510]: I0203 11:19:41.691524    1510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn5fq\" (UniqueName: \"kubernetes.io/projected/ae878374-b537-43d8-8e9a-810a6ecdd7d0-kube-api-access-xn5fq\") pod \"hello-world-app-7d9564db4-k8ctb\" (UID: \"ae878374-b537-43d8-8e9a-810a6ecdd7d0\") " pod="default/hello-world-app-7d9564db4-k8ctb"
	Feb 03 11:19:41 addons-595492 kubelet[1510]: W0203 11:19:41.993825    1510 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/48839ec78b4fd466fb5df5b16785a4e877a865004bb3b243b98813c772e6b3dd/crio-43441f89c7225912fdc6fa4bb7303dad2f8fe3416d76e6295be029a0777d8ac5 WatchSource:0}: Error finding container 43441f89c7225912fdc6fa4bb7303dad2f8fe3416d76e6295be029a0777d8ac5: Status 404 returned error can't find the container with id 43441f89c7225912fdc6fa4bb7303dad2f8fe3416d76e6295be029a0777d8ac5
	
	
	==> storage-provisioner [e55f70a7dafdd57930d038ac8f18f20c9da2c11dfb43a275b13d8070bc761fe5] <==
	I0203 11:14:57.021579       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0203 11:14:57.104288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0203 11:14:57.104422       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0203 11:14:57.114775       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0203 11:14:57.114969       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea39ab6a-0ee8-4e67-8ae5-ccacdc296f7a", APIVersion:"v1", ResourceVersion:"932", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-595492_cbc490c0-1b59-44ae-a8c4-672549b22444 became leader
	I0203 11:14:57.120268       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-595492_cbc490c0-1b59-44ae-a8c4-672549b22444!
	I0203 11:14:57.221337       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-595492_cbc490c0-1b59-44ae-a8c4-672549b22444!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-595492 -n addons-595492
helpers_test.go:261: (dbg) Run:  kubectl --context addons-595492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-r47h4 ingress-nginx-admission-patch-hxktf
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-595492 describe pod ingress-nginx-admission-create-r47h4 ingress-nginx-admission-patch-hxktf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-595492 describe pod ingress-nginx-admission-create-r47h4 ingress-nginx-admission-patch-hxktf: exit status 1 (90.797191ms)

** stderr **
	Error from server (NotFound): pods "ingress-nginx-admission-create-r47h4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hxktf" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-595492 describe pod ingress-nginx-admission-create-r47h4 ingress-nginx-admission-patch-hxktf: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-595492 addons disable ingress-dns --alsologtostderr -v=1: (1.705645394s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-595492 addons disable ingress --alsologtostderr -v=1: (7.768219717s)
--- FAIL: TestAddons/parallel/Ingress (152.64s)
Test pass (298/331)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.75
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.1/json-events 4.61
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.2
18 TestDownloadOnly/v1.32.1/DeleteAll 0.37
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.25
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 190.35
31 TestAddons/serial/GCPAuth/Namespaces 0.25
32 TestAddons/serial/GCPAuth/FakeCredentials 10.96
35 TestAddons/parallel/Registry 18.66
37 TestAddons/parallel/InspektorGadget 11.8
38 TestAddons/parallel/MetricsServer 6.92
40 TestAddons/parallel/CSI 49.23
41 TestAddons/parallel/Headlamp 17.06
42 TestAddons/parallel/CloudSpanner 6.56
43 TestAddons/parallel/LocalPath 53.6
44 TestAddons/parallel/NvidiaDevicePlugin 6.52
45 TestAddons/parallel/Yakd 11.75
47 TestAddons/StoppedEnableDisable 12.17
48 TestCertOptions 38.15
49 TestCertExpiration 235.36
51 TestForceSystemdFlag 41.11
52 TestForceSystemdEnv 45.97
58 TestErrorSpam/setup 30.82
59 TestErrorSpam/start 0.76
60 TestErrorSpam/status 1.35
61 TestErrorSpam/pause 1.7
62 TestErrorSpam/unpause 1.87
63 TestErrorSpam/stop 1.54
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 47.47
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 25.96
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.46
75 TestFunctional/serial/CacheCmd/cache/add_local 1.42
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.17
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.16
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 39.46
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.71
86 TestFunctional/serial/LogsFileCmd 2.17
87 TestFunctional/serial/InvalidService 4.71
89 TestFunctional/parallel/ConfigCmd 0.51
90 TestFunctional/parallel/DashboardCmd 15.17
91 TestFunctional/parallel/DryRun 0.48
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.05
97 TestFunctional/parallel/ServiceCmdConnect 10.71
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 24.75
101 TestFunctional/parallel/SSHCmd 0.76
102 TestFunctional/parallel/CpCmd 2.01
104 TestFunctional/parallel/FileSync 0.35
105 TestFunctional/parallel/CertSync 2.25
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
113 TestFunctional/parallel/License 0.29
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.34
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.41
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
129 TestFunctional/parallel/MountCmd/any-port 9.08
130 TestFunctional/parallel/ServiceCmd/List 0.52
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
133 TestFunctional/parallel/ServiceCmd/Format 0.37
134 TestFunctional/parallel/ServiceCmd/URL 0.39
135 TestFunctional/parallel/MountCmd/specific-port 1.94
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.34
137 TestFunctional/parallel/Version/short 0.11
138 TestFunctional/parallel/Version/components 1.32
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.8
144 TestFunctional/parallel/ImageCommands/Setup 0.76
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.43
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.98
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.69
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 181.38
163 TestMultiControlPlane/serial/DeployApp 8.83
164 TestMultiControlPlane/serial/PingHostFromPods 1.61
165 TestMultiControlPlane/serial/AddWorkerNode 37.96
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
168 TestMultiControlPlane/serial/CopyFile 19.47
169 TestMultiControlPlane/serial/StopSecondaryNode 12.68
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
171 TestMultiControlPlane/serial/RestartSecondaryNode 22.88
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 201.47
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.6
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
176 TestMultiControlPlane/serial/StopCluster 35.75
177 TestMultiControlPlane/serial/RestartCluster 113.57
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
179 TestMultiControlPlane/serial/AddSecondaryNode 74.6
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
184 TestJSONOutput/start/Command 49.07
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.76
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.64
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.78
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.25
209 TestKicCustomNetwork/create_custom_network 40.32
210 TestKicCustomNetwork/use_default_bridge_network 33.48
211 TestKicExistingNetwork 32.18
212 TestKicCustomSubnet 31.65
213 TestKicStaticIP 38.07
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 68.2
218 TestMountStart/serial/StartWithMountFirst 7.41
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 6.3
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.62
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.2
225 TestMountStart/serial/RestartStopped 7.89
226 TestMountStart/serial/VerifyMountPostStop 0.26
229 TestMultiNode/serial/FreshStart2Nodes 77.26
230 TestMultiNode/serial/DeployApp2Nodes 6.24
231 TestMultiNode/serial/PingHostFrom2Pods 1.04
232 TestMultiNode/serial/AddNode 29.48
233 TestMultiNode/serial/MultiNodeLabels 0.1
234 TestMultiNode/serial/ProfileList 0.67
235 TestMultiNode/serial/CopyFile 9.83
236 TestMultiNode/serial/StopNode 2.5
237 TestMultiNode/serial/StartAfterStop 9.63
238 TestMultiNode/serial/RestartKeepsNodes 83.11
239 TestMultiNode/serial/DeleteNode 5.27
240 TestMultiNode/serial/StopMultiNode 23.81
241 TestMultiNode/serial/RestartMultiNode 49.88
242 TestMultiNode/serial/ValidateNameConflict 35.16
247 TestPreload 127.92
249 TestScheduledStopUnix 103.91
252 TestInsufficientStorage 11.16
253 TestRunningBinaryUpgrade 82.81
255 TestKubernetesUpgrade 138.52
256 TestMissingContainerUpgrade 164.99
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
259 TestNoKubernetes/serial/StartWithK8s 39.63
260 TestNoKubernetes/serial/StartWithStopK8s 7.66
261 TestNoKubernetes/serial/Start 9.17
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
263 TestNoKubernetes/serial/ProfileList 1.14
264 TestNoKubernetes/serial/Stop 1.29
265 TestNoKubernetes/serial/StartNoArgs 7.32
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
267 TestStoppedBinaryUpgrade/Setup 0.6
268 TestStoppedBinaryUpgrade/Upgrade 85.64
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
278 TestPause/serial/Start 63.9
279 TestPause/serial/SecondStartNoReconfiguration 28.6
287 TestNetworkPlugins/group/false 4.27
291 TestPause/serial/Pause 0.9
292 TestPause/serial/VerifyStatus 0.43
293 TestPause/serial/Unpause 0.97
294 TestPause/serial/PauseAgain 1.08
295 TestPause/serial/DeletePaused 2.93
296 TestPause/serial/VerifyDeletedResources 0.36
298 TestStartStop/group/old-k8s-version/serial/FirstStart 185.65
300 TestStartStop/group/no-preload/serial/FirstStart 63.51
301 TestStartStop/group/old-k8s-version/serial/DeployApp 11.77
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.54
303 TestStartStop/group/old-k8s-version/serial/Stop 12.26
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
305 TestStartStop/group/old-k8s-version/serial/SecondStart 128.06
306 TestStartStop/group/no-preload/serial/DeployApp 11.42
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.91
308 TestStartStop/group/no-preload/serial/Stop 12.21
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
310 TestStartStop/group/no-preload/serial/SecondStart 282.85
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
314 TestStartStop/group/old-k8s-version/serial/Pause 3.02
316 TestStartStop/group/embed-certs/serial/FirstStart 49.71
317 TestStartStop/group/embed-certs/serial/DeployApp 10.37
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
319 TestStartStop/group/embed-certs/serial/Stop 11.92
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
321 TestStartStop/group/embed-certs/serial/SecondStart 265.99
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
325 TestStartStop/group/no-preload/serial/Pause 3.1
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.95
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.39
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 289.05
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
336 TestStartStop/group/embed-certs/serial/Pause 3.12
338 TestStartStop/group/newest-cni/serial/FirstStart 35.14
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
341 TestStartStop/group/newest-cni/serial/Stop 1.29
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
343 TestStartStop/group/newest-cni/serial/SecondStart 16.85
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
347 TestStartStop/group/newest-cni/serial/Pause 3.14
348 TestNetworkPlugins/group/auto/Start 50.6
349 TestNetworkPlugins/group/auto/KubeletFlags 0.3
350 TestNetworkPlugins/group/auto/NetCatPod 11.3
351 TestNetworkPlugins/group/auto/DNS 0.19
352 TestNetworkPlugins/group/auto/Localhost 0.16
353 TestNetworkPlugins/group/auto/HairPin 0.15
354 TestNetworkPlugins/group/kindnet/Start 48.67
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.27
359 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
360 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
361 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.41
362 TestNetworkPlugins/group/kindnet/DNS 0.22
363 TestNetworkPlugins/group/kindnet/Localhost 0.19
364 TestNetworkPlugins/group/kindnet/HairPin 0.21
365 TestNetworkPlugins/group/calico/Start 79.38
366 TestNetworkPlugins/group/custom-flannel/Start 63.74
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
370 TestNetworkPlugins/group/calico/KubeletFlags 0.43
371 TestNetworkPlugins/group/calico/NetCatPod 13.46
372 TestNetworkPlugins/group/custom-flannel/DNS 0.19
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
375 TestNetworkPlugins/group/calico/DNS 0.19
376 TestNetworkPlugins/group/calico/Localhost 0.2
377 TestNetworkPlugins/group/calico/HairPin 0.16
378 TestNetworkPlugins/group/enable-default-cni/Start 79.49
379 TestNetworkPlugins/group/flannel/Start 61.61
380 TestNetworkPlugins/group/flannel/ControllerPod 6.01
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
382 TestNetworkPlugins/group/flannel/NetCatPod 10.28
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
385 TestNetworkPlugins/group/flannel/DNS 0.22
386 TestNetworkPlugins/group/flannel/Localhost 0.18
387 TestNetworkPlugins/group/flannel/HairPin 0.21
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
391 TestNetworkPlugins/group/bridge/Start 70.84
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
393 TestNetworkPlugins/group/bridge/NetCatPod 11.27
394 TestNetworkPlugins/group/bridge/DNS 0.18
395 TestNetworkPlugins/group/bridge/Localhost 0.15
396 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (5.75s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-240719 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-240719 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.752756554s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.75s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0203 11:13:06.451402  298903 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0203 11:13:06.451487  298903 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-293520/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-240719
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-240719: exit status 85 (95.083971ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-240719 | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC |          |
	|         | -p download-only-240719        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:13:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:13:00.748749  298909 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:13:00.748873  298909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:13:00.748884  298909 out.go:358] Setting ErrFile to fd 2...
	I0203 11:13:00.748890  298909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:13:00.749154  298909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
	W0203 11:13:00.749291  298909 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20354-293520/.minikube/config/config.json: open /home/jenkins/minikube-integration/20354-293520/.minikube/config/config.json: no such file or directory
	I0203 11:13:00.749690  298909 out.go:352] Setting JSON to true
	I0203 11:13:00.750600  298909 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6910,"bootTime":1738574271,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0203 11:13:00.750674  298909 start.go:139] virtualization:  
	I0203 11:13:00.754891  298909 out.go:97] [download-only-240719] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0203 11:13:00.755057  298909 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20354-293520/.minikube/cache/preloaded-tarball: no such file or directory
	I0203 11:13:00.755159  298909 notify.go:220] Checking for updates...
	I0203 11:13:00.758695  298909 out.go:169] MINIKUBE_LOCATION=20354
	I0203 11:13:00.761634  298909 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:13:00.764624  298909 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	I0203 11:13:00.767466  298909 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	I0203 11:13:00.770382  298909 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0203 11:13:00.775982  298909 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0203 11:13:00.776263  298909 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:13:00.802701  298909 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0203 11:13:00.802809  298909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:13:00.858717  298909 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-03 11:13:00.849705947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:13:00.858830  298909 docker.go:318] overlay module found
	I0203 11:13:00.861799  298909 out.go:97] Using the docker driver based on user configuration
	I0203 11:13:00.861826  298909 start.go:297] selected driver: docker
	I0203 11:13:00.861833  298909 start.go:901] validating driver "docker" against <nil>
	I0203 11:13:00.861944  298909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:13:00.914068  298909 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-03 11:13:00.905317503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:13:00.914315  298909 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 11:13:00.914606  298909 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0203 11:13:00.914764  298909 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0203 11:13:00.917975  298909 out.go:169] Using Docker driver with root privileges
	I0203 11:13:00.920774  298909 cni.go:84] Creating CNI manager for ""
	I0203 11:13:00.920833  298909 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0203 11:13:00.920846  298909 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0203 11:13:00.920930  298909 start.go:340] cluster config:
	{Name:download-only-240719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-240719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:13:00.923904  298909 out.go:97] Starting "download-only-240719" primary control-plane node in "download-only-240719" cluster
	I0203 11:13:00.923922  298909 cache.go:121] Beginning downloading kic base image for docker with crio
	I0203 11:13:00.926706  298909 out.go:97] Pulling base image v0.0.46 ...
	I0203 11:13:00.926731  298909 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 11:13:00.926907  298909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0203 11:13:00.943270  298909 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0203 11:13:00.944229  298909 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0203 11:13:00.944333  298909 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0203 11:13:00.985451  298909 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0203 11:13:00.985487  298909 cache.go:56] Caching tarball of preloaded images
	I0203 11:13:00.986266  298909 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 11:13:00.989532  298909 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0203 11:13:00.989568  298909 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0203 11:13:01.070870  298909 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20354-293520/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0203 11:13:04.542799  298909 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0203 11:13:04.542893  298909 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20354-293520/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0203 11:13:05.699160  298909 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0203 11:13:05.699564  298909 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/download-only-240719/config.json ...
	I0203 11:13:05.699598  298909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/download-only-240719/config.json: {Name:mkee3b8b5811fba8ab87f71fc0885e5d1c8f4076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:13:05.699783  298909 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 11:13:05.700600  298909 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20354-293520/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-240719 host does not exist
	  To start a cluster, run: "minikube start -p download-only-240719"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-240719
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.1/json-events (4.61s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-606227 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-606227 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.610631738s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (4.61s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0203 11:13:11.522561  298903 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0203 11:13:11.522601  298903 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-293520/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.2s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-606227
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-606227: exit status 85 (202.395786ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-240719 | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC |                     |
	|         | -p download-only-240719        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	| delete  | -p download-only-240719        | download-only-240719 | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	| start   | -o=json --download-only        | download-only-606227 | jenkins | v1.35.0 | 03 Feb 25 11:13 UTC |                     |
	|         | -p download-only-606227        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:13:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:13:06.961707  299110 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:13:06.961894  299110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:13:06.961907  299110 out.go:358] Setting ErrFile to fd 2...
	I0203 11:13:06.961913  299110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:13:06.962176  299110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
	I0203 11:13:06.962622  299110 out.go:352] Setting JSON to true
	I0203 11:13:06.963495  299110 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6916,"bootTime":1738574271,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0203 11:13:06.963572  299110 start.go:139] virtualization:  
	I0203 11:13:06.967248  299110 out.go:97] [download-only-606227] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0203 11:13:06.967753  299110 notify.go:220] Checking for updates...
	I0203 11:13:06.970935  299110 out.go:169] MINIKUBE_LOCATION=20354
	I0203 11:13:06.974215  299110 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:13:06.977064  299110 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	I0203 11:13:06.979926  299110 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	I0203 11:13:06.982782  299110 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0203 11:13:06.988398  299110 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0203 11:13:06.988690  299110 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:13:07.020818  299110 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0203 11:13:07.020944  299110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:13:07.077869  299110 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-03 11:13:07.068081236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:13:07.077982  299110 docker.go:318] overlay module found
	I0203 11:13:07.080998  299110 out.go:97] Using the docker driver based on user configuration
	I0203 11:13:07.081071  299110 start.go:297] selected driver: docker
	I0203 11:13:07.081083  299110 start.go:901] validating driver "docker" against <nil>
	I0203 11:13:07.081199  299110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:13:07.132443  299110 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-03 11:13:07.123251671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:13:07.132671  299110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 11:13:07.132955  299110 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0203 11:13:07.133108  299110 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0203 11:13:07.136035  299110 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-606227 host does not exist
	  To start a cluster, run: "minikube start -p download-only-606227"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.20s)

TestDownloadOnly/v1.32.1/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.37s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-606227
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.25s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0203 11:13:13.440541  298903 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-482869 --alsologtostderr --binary-mirror http://127.0.0.1:36113 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-482869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-482869
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-595492
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-595492: exit status 85 (67.498329ms)

-- stdout --
	* Profile "addons-595492" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-595492"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-595492
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-595492: exit status 85 (74.454617ms)

-- stdout --
	* Profile "addons-595492" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-595492"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (190.35s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-595492 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-595492 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m10.350175041s)
--- PASS: TestAddons/Setup (190.35s)

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-595492 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-595492 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (10.96s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-595492 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-595492 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1726d194-1988-44db-87b5-2ddc4498cdb1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1726d194-1988-44db-87b5-2ddc4498cdb1] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004172444s
addons_test.go:633: (dbg) Run:  kubectl --context addons-595492 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-595492 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-595492 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-595492 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.96s)

TestAddons/parallel/Registry (18.66s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 15.410722ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-pj84h" [572705f6-2f5d-43aa-b391-4385619b7743] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004725866s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-htn7v" [2679ee41-9f70-4f5c-a26d-6b342c3151e6] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003722249s
addons_test.go:331: (dbg) Run:  kubectl --context addons-595492 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-595492 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-595492 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.577472044s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 ip
2025/02/03 11:17:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.66s)

TestAddons/parallel/InspektorGadget (11.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pxttm" [9dfe55a2-7a6d-439f-9eea-a0111615f250] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004734415s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-595492 addons disable inspektor-gadget --alsologtostderr -v=1: (5.791560465s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

TestAddons/parallel/MetricsServer (6.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.070614ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-kdlwk" [6d1bb40a-f8f3-4406-b15a-d6c523995470] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003803982s
addons_test.go:402: (dbg) Run:  kubectl --context addons-595492 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.92s)

TestAddons/parallel/CSI (49.23s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0203 11:17:01.526441  298903 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0203 11:17:01.533877  298903 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0203 11:17:01.533921  298903 kapi.go:107] duration metric: took 11.963357ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 11.978069ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-595492 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-595492 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [81120d28-ba11-41d8-b2ef-63c96438da69] Pending
helpers_test.go:344: "task-pv-pod" [81120d28-ba11-41d8-b2ef-63c96438da69] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [81120d28-ba11-41d8-b2ef-63c96438da69] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004239035s
addons_test.go:511: (dbg) Run:  kubectl --context addons-595492 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-595492 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-595492 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-595492 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-595492 delete pod task-pv-pod: (1.050015243s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-595492 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-595492 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-595492 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7e4ba757-47b0-4c0b-b93e-af9a9729c98a] Pending
helpers_test.go:344: "task-pv-pod-restore" [7e4ba757-47b0-4c0b-b93e-af9a9729c98a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7e4ba757-47b0-4c0b-b93e-af9a9729c98a] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003798097s
addons_test.go:553: (dbg) Run:  kubectl --context addons-595492 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-595492 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-595492 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-595492 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.863736089s)
--- PASS: TestAddons/parallel/CSI (49.23s)

TestAddons/parallel/Headlamp (17.06s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-595492 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-595492 --alsologtostderr -v=1: (1.151954948s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-g5kpn" [21565fb8-915f-454e-9b35-c0cd8c531456] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-g5kpn" [21565fb8-915f-454e-9b35-c0cd8c531456] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-g5kpn" [21565fb8-915f-454e-9b35-c0cd8c531456] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00398237s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-595492 addons disable headlamp --alsologtostderr -v=1: (5.901640507s)
--- PASS: TestAddons/parallel/Headlamp (17.06s)

TestAddons/parallel/CloudSpanner (6.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-4vw9k" [8edbf1bd-fa6d-48fb-a0a6-59a3234bb516] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0038274s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

TestAddons/parallel/LocalPath (53.6s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-595492 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-595492 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-595492 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [360bf09f-414b-47f5-8526-8b4ec6e4fd6b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [360bf09f-414b-47f5-8526-8b4ec6e4fd6b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [360bf09f-414b-47f5-8526-8b4ec6e4fd6b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00465285s
addons_test.go:906: (dbg) Run:  kubectl --context addons-595492 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 ssh "cat /opt/local-path-provisioner/pvc-0ef87c84-f946-4eaa-bcc6-293143cf15da_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-595492 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-595492 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-595492 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.319258262s)
--- PASS: TestAddons/parallel/LocalPath (53.60s)

TestAddons/parallel/NvidiaDevicePlugin (6.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bbdcr" [c4cf8c1e-eadc-4443-b557-2b1d3b1eaee1] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004224616s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

TestAddons/parallel/Yakd (11.75s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-s2vnt" [4220c602-7d86-4167-82e3-df155563c178] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004138246s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-595492 addons disable yakd --alsologtostderr -v=1: (5.741212934s)
--- PASS: TestAddons/parallel/Yakd (11.75s)

TestAddons/StoppedEnableDisable (12.17s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-595492
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-595492: (11.88978771s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-595492
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-595492
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-595492
--- PASS: TestAddons/StoppedEnableDisable (12.17s)

TestCertOptions (38.15s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-794088 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0203 11:58:07.525656  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-794088 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.35478774s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-794088 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-794088 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-794088 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-794088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-794088
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-794088: (2.091235937s)
--- PASS: TestCertOptions (38.15s)

TestCertExpiration (235.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-108144 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-108144 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.699636564s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-108144 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0203 12:01:10.592956  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-108144 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.200783608s)
helpers_test.go:175: Cleaning up "cert-expiration-108144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-108144
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-108144: (2.458802755s)
--- PASS: TestCertExpiration (235.36s)
TestForceSystemdFlag (41.11s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-986325 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-986325 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.690118886s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-986325 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-986325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-986325
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-986325: (3.013177475s)
--- PASS: TestForceSystemdFlag (41.11s)
TestForceSystemdEnv (45.97s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-682578 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-682578 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.311364385s)
helpers_test.go:175: Cleaning up "force-systemd-env-682578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-682578
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-682578: (2.656505074s)
--- PASS: TestForceSystemdEnv (45.97s)
TestErrorSpam/setup (30.82s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-085906 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-085906 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-085906 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-085906 --driver=docker  --container-runtime=crio: (30.821813633s)
--- PASS: TestErrorSpam/setup (30.82s)
TestErrorSpam/start (0.76s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)
TestErrorSpam/status (1.35s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 status
--- PASS: TestErrorSpam/status (1.35s)
TestErrorSpam/pause (1.7s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 pause
--- PASS: TestErrorSpam/pause (1.70s)
TestErrorSpam/unpause (1.87s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 unpause
--- PASS: TestErrorSpam/unpause (1.87s)
TestErrorSpam/stop (1.54s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 stop: (1.332725104s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-085906 --log_dir /tmp/nospam-085906 stop
--- PASS: TestErrorSpam/stop (1.54s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20354-293520/.minikube/files/etc/test/nested/copy/298903/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (47.47s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622932 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0203 11:21:25.367152  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:25.373504  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:25.385011  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:25.406472  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:25.447857  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:25.529222  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:25.690739  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:26.012336  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:26.654393  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:27.936739  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:30.499586  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:21:35.621509  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-622932 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (47.472675561s)
--- PASS: TestFunctional/serial/StartWithProxy (47.47s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (25.96s)
=== RUN   TestFunctional/serial/SoftStart
I0203 11:21:43.582718  298903 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622932 --alsologtostderr -v=8
E0203 11:21:45.863685  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:22:06.345011  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-622932 --alsologtostderr -v=8: (25.952277118s)
functional_test.go:680: soft start took 25.956137595s for "functional-622932" cluster.
I0203 11:22:09.535353  298903 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (25.96s)
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-622932 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)
TestFunctional/serial/CacheCmd/cache/add_remote (4.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-622932 cache add registry.k8s.io/pause:3.1: (1.525190409s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-622932 cache add registry.k8s.io/pause:3.3: (1.540483687s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-622932 cache add registry.k8s.io/pause:latest: (1.398638489s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.46s)
TestFunctional/serial/CacheCmd/cache/add_local (1.42s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-622932 /tmp/TestFunctionalserialCacheCmdcacheadd_local883191498/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 cache add minikube-local-cache-test:functional-622932
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 cache delete minikube-local-cache-test:functional-622932
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-622932
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.42s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)
TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622932 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (305.729177ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-arm64 -p functional-622932 cache reload: (1.231143485s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)
TestFunctional/serial/MinikubeKubectlCmd (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 kubectl -- --context functional-622932 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-622932 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
TestFunctional/serial/ExtraConfig (39.46s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622932 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0203 11:22:47.306968  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-622932 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.464372132s)
functional_test.go:778: restart took 39.464485174s for "functional-622932" cluster.
I0203 11:22:58.074900  298903 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (39.46s)
TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-622932 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
TestFunctional/serial/LogsCmd (1.71s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-622932 logs: (1.708302503s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)
TestFunctional/serial/LogsFileCmd (2.17s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 logs --file /tmp/TestFunctionalserialLogsFileCmd3765412214/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-622932 logs --file /tmp/TestFunctionalserialLogsFileCmd3765412214/001/logs.txt: (2.169324703s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.17s)
TestFunctional/serial/InvalidService (4.71s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-622932 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-622932
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-622932: exit status 115 (718.064223ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31604 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-622932 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.71s)
TestFunctional/parallel/ConfigCmd (0.51s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622932 config get cpus: exit status 14 (83.852901ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622932 config get cpus: exit status 14 (80.305398ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
TestFunctional/parallel/DashboardCmd (15.17s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-622932 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-622932 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 325401: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.17s)
TestFunctional/parallel/DryRun (0.48s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622932 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-622932 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (226.509301ms)
-- stdout --
	* [functional-622932] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0203 11:23:39.206477  325091 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:23:39.206729  325091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:23:39.206758  325091 out.go:358] Setting ErrFile to fd 2...
	I0203 11:23:39.206777  325091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:23:39.207063  325091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
	I0203 11:23:39.207474  325091 out.go:352] Setting JSON to false
	I0203 11:23:39.211683  325091 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7549,"bootTime":1738574271,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0203 11:23:39.211806  325091 start.go:139] virtualization:  
	I0203 11:23:39.215615  325091 out.go:177] * [functional-622932] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0203 11:23:39.218638  325091 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:23:39.218750  325091 notify.go:220] Checking for updates...
	I0203 11:23:39.224711  325091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:23:39.227804  325091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	I0203 11:23:39.231307  325091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	I0203 11:23:39.234486  325091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0203 11:23:39.238542  325091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:23:39.243337  325091 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:23:39.243891  325091 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:23:39.274267  325091 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0203 11:23:39.274397  325091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:23:39.332034  325091 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-03 11:23:39.322712848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:23:39.332152  325091 docker.go:318] overlay module found
	I0203 11:23:39.335519  325091 out.go:177] * Using the docker driver based on existing profile
	I0203 11:23:39.338427  325091 start.go:297] selected driver: docker
	I0203 11:23:39.338448  325091 start.go:901] validating driver "docker" against &{Name:functional-622932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-622932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:23:39.338564  325091 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:23:39.342012  325091 out.go:201] 
	W0203 11:23:39.344908  325091 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0203 11:23:39.347810  325091 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622932 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
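The DryRun failure path above comes down to minikube's memory validation: the requested `250MB` is parsed, normalized, and compared against the 1800MB usable floor before the process exits with `RSRC_INSUFFICIENT_REQ_MEMORY` (exit status 23). A minimal sketch of that kind of check, in Python rather than minikube's actual Go internals; `parse_memory_mb` and `validate_memory` are hypothetical names for illustration only:

```python
import re

# Illustrative re-creation of the kind of check behind
# RSRC_INSUFFICIENT_REQ_MEMORY; names and structure are hypothetical,
# not minikube's actual implementation.
UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9,
         "KIB": 2**10, "MIB": 2**20, "GIB": 2**30}

MIN_MEMORY_MB = 1800  # usable minimum reported in the log above


def parse_memory_mb(spec: str) -> float:
    """Parse a size such as '250MB' or '4GiB' into decimal megabytes."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([KMG]i?B)", spec.strip(), re.IGNORECASE)
    if not m:
        raise ValueError(f"unparseable memory spec: {spec!r}")
    value, unit = float(m.group(1)), m.group(2).upper()
    return value * UNITS[unit] / 10**6


def validate_memory(spec: str) -> None:
    """Reject requests below the usable minimum, mirroring the log's error."""
    if parse_memory_mb(spec) < MIN_MEMORY_MB:
        raise SystemExit(
            "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: "
            f"Requested memory allocation {spec} is less than the "
            f"usable minimum of {MIN_MEMORY_MB}MB")
```

Under this sketch `validate_memory("250MB")` aborts just as the dry run does, while the profile's configured `Memory:4000` passes the floor comfortably.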

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-622932 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-622932 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (258.251197ms)
-- stdout --
	* [functional-622932] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0203 11:23:38.935749  325040 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:23:38.936000  325040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:23:38.936028  325040 out.go:358] Setting ErrFile to fd 2...
	I0203 11:23:38.936068  325040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:23:38.937103  325040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
	I0203 11:23:38.937556  325040 out.go:352] Setting JSON to false
	I0203 11:23:38.938537  325040 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7548,"bootTime":1738574271,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0203 11:23:38.938642  325040 start.go:139] virtualization:  
	I0203 11:23:38.942442  325040 out.go:177] * [functional-622932] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0203 11:23:38.945710  325040 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:23:38.945775  325040 notify.go:220] Checking for updates...
	I0203 11:23:38.952810  325040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:23:38.955809  325040 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	I0203 11:23:38.959124  325040 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	I0203 11:23:38.962194  325040 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0203 11:23:38.965204  325040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:23:38.968722  325040 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:23:38.969249  325040 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:23:39.015579  325040 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0203 11:23:39.015837  325040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:23:39.095744  325040 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-03 11:23:39.080991051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:23:39.095867  325040 docker.go:318] overlay module found
	I0203 11:23:39.100144  325040 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0203 11:23:39.103835  325040 start.go:297] selected driver: docker
	I0203 11:23:39.103856  325040 start.go:901] validating driver "docker" against &{Name:functional-622932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-622932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:23:39.103978  325040 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:23:39.108243  325040 out.go:201] 
	W0203 11:23:39.111526  325040 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0203 11:23:39.120687  325040 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/ServiceCmdConnect (10.71s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-622932 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-622932 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-67hv4" [e0a45e36-b01d-4535-a70a-6a8fedf8893e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-67hv4" [e0a45e36-b01d-4535-a70a-6a8fedf8893e] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003711217s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:30755
functional_test.go:1692: http://192.168.49.2:30755: success! body:

Hostname: hello-node-connect-8449669db6-67hv4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30755
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.71s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (24.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6655505d-3070-48b6-839e-c1b0f29b88de] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004434055s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-622932 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-622932 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-622932 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-622932 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [345ff04c-c13d-4a3f-92a1-8a41706c4ef8] Pending
helpers_test.go:344: "sp-pod" [345ff04c-c13d-4a3f-92a1-8a41706c4ef8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [345ff04c-c13d-4a3f-92a1-8a41706c4ef8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004087476s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-622932 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-622932 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-622932 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [112c6587-b9c4-4a98-beb1-e063bfc368ab] Pending
helpers_test.go:344: "sp-pod" [112c6587-b9c4-4a98-beb1-e063bfc368ab] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004291423s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-622932 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.75s)

TestFunctional/parallel/SSHCmd (0.76s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

TestFunctional/parallel/CpCmd (2.01s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh -n functional-622932 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 cp functional-622932:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3815581984/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh -n functional-622932 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh -n functional-622932 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.01s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/298903/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo cat /etc/test/nested/copy/298903/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/298903.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo cat /etc/ssl/certs/298903.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/298903.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo cat /usr/share/ca-certificates/298903.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/2989032.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo cat /etc/ssl/certs/2989032.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/2989032.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo cat /usr/share/ca-certificates/2989032.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-622932 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622932 ssh "sudo systemctl is-active docker": exit status 1 (291.937977ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622932 ssh "sudo systemctl is-active containerd": exit status 1 (393.909086ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-622932 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-622932 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-622932 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 323059: os: process already finished
helpers_test.go:502: unable to terminate pid 322913: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-622932 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-622932 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-622932 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2958c6f9-e47c-4204-9240-af12df2becec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2958c6f9-e47c-4204-9240-af12df2becec] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004169005s
I0203 11:23:18.248438  298903 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-622932 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.231.81 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-622932 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-622932 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-622932 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-zhps8" [fc3fbfa7-452c-424b-9dce-d256166e2863] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-zhps8" [fc3fbfa7-452c-424b-9dce-d256166e2863] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.029714279s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "355.824218ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "58.734349ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "344.736568ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "56.183201ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (9.08s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdany-port1151532188/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1738581813520716801" to /tmp/TestFunctionalparallelMountCmdany-port1151532188/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1738581813520716801" to /tmp/TestFunctionalparallelMountCmdany-port1151532188/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1738581813520716801" to /tmp/TestFunctionalparallelMountCmdany-port1151532188/001/test-1738581813520716801
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (338.602389ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0203 11:23:33.860435  298903 retry.go:31] will retry after 678.949465ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  3 11:23 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  3 11:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  3 11:23 test-1738581813520716801
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh cat /mount-9p/test-1738581813520716801
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-622932 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d5cbf035-0ad3-4f6b-aa1b-1cf5f125a9c5] Pending
helpers_test.go:344: "busybox-mount" [d5cbf035-0ad3-4f6b-aa1b-1cf5f125a9c5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d5cbf035-0ad3-4f6b-aa1b-1cf5f125a9c5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d5cbf035-0ad3-4f6b-aa1b-1cf5f125a9c5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004146566s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-622932 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdany-port1151532188/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.08s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 service list -o json
functional_test.go:1511: Took "501.210922ms" to run "out/minikube-linux-arm64 -p functional-622932 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31391
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31391
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdspecific-port1199671887/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.546396ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0203 11:23:42.944413  298903 retry.go:31] will retry after 309.33008ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdspecific-port1199671887/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622932 ssh "sudo umount -f /mount-9p": exit status 1 (402.111384ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-622932 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdspecific-port1199671887/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2119480577/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2119480577/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2119480577/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T" /mount1: exit status 1 (869.283666ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0203 11:23:45.411092  298903 retry.go:31] will retry after 370.080211ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-622932 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2119480577/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2119480577/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-622932 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2119480577/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.34s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-622932 version -o=json --components: (1.317352862s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-622932 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-622932
localhost/kicbase/echo-server:functional-622932
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-622932 image ls --format short --alsologtostderr:
I0203 11:23:58.385539  328161 out.go:345] Setting OutFile to fd 1 ...
I0203 11:23:58.385770  328161 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:23:58.385802  328161 out.go:358] Setting ErrFile to fd 2...
I0203 11:23:58.385823  328161 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:23:58.386080  328161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
I0203 11:23:58.387718  328161 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 11:23:58.387923  328161 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 11:23:58.388492  328161 cli_runner.go:164] Run: docker container inspect functional-622932 --format={{.State.Status}}
I0203 11:23:58.406789  328161 ssh_runner.go:195] Run: systemctl --version
I0203 11:23:58.406902  328161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622932
I0203 11:23:58.426160  328161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/functional-622932/id_rsa Username:docker}
I0203 11:23:58.513410  328161 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-622932 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | 7fc9d4aa817aa | 143MB  |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 2933761aa7ada | 88.2MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e124fbed851d7 | 98.3MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| docker.io/library/nginx                 | alpine             | f9d642c42f7bc | 52.3MB |
| docker.io/library/nginx                 | latest             | 781d902f1e046 | 201MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-scheduler          | v1.32.1            | ddb38cac617cb | 69MB   |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| localhost/kicbase/echo-server           | functional-622932  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 265c2dedf28ab | 95MB   |
| localhost/minikube-local-cache-test     | functional-622932  | 5208e3a043110 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | e1181ee320546 | 99MB   |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-622932 image ls --format table --alsologtostderr:
I0203 11:23:59.206431  328361 out.go:345] Setting OutFile to fd 1 ...
I0203 11:23:59.206690  328361 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:23:59.206719  328361 out.go:358] Setting ErrFile to fd 2...
I0203 11:23:59.206738  328361 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:23:59.207036  328361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
I0203 11:23:59.207763  328361 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 11:23:59.207947  328361 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 11:23:59.208465  328361 cli_runner.go:164] Run: docker container inspect functional-622932 --format={{.State.Status}}
I0203 11:23:59.241287  328361 ssh_runner.go:195] Run: systemctl --version
I0203 11:23:59.241340  328361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622932
I0203 11:23:59.260789  328361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/functional-622932/id_rsa Username:docker}
I0203 11:23:59.349057  328361 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-622932 image ls --format json --alsologtostderr:
[{"id":"781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:5ad6d1fbf7a41cf81658450236559fd03a80f78e6a5ed21b08e373dec4948712"],"repoTags":["docker.io/library/nginx:latest"],"size":"201125287"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-622932"],"size":"4788229"},{"id":"2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6","repoDigests":["docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be","docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"99018802"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1","registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"68973892"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"5208e3a04311040b2f0e3511b8fa7da9d808223c0686ae84cf9ca7b76589a6b8","repoDigests":["localhost/minikube-local-cache-test@sha256:37c4ad81a10e22ea9baf519be4e7aa989c8113589055c8d16c56330e1d3fbd08"],"repoTags":["localhost/minikube-local-cache-test:functional-622932"],"size":"3330"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"94991840"},{"id":"2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3","registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"88241478"},{"id":"e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"98313623"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d","repoDigests":["docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10","docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52333544"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"143226622"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-622932 image ls --format json --alsologtostderr:
I0203 11:23:58.906699  328302 out.go:345] Setting OutFile to fd 1 ...
I0203 11:23:58.906950  328302 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:23:58.906967  328302 out.go:358] Setting ErrFile to fd 2...
I0203 11:23:58.906973  328302 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:23:58.907268  328302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
I0203 11:23:58.907982  328302 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 11:23:58.908119  328302 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 11:23:58.908643  328302 cli_runner.go:164] Run: docker container inspect functional-622932 --format={{.State.Status}}
I0203 11:23:58.951570  328302 ssh_runner.go:195] Run: systemctl --version
I0203 11:23:58.951630  328302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622932
I0203 11:23:58.980452  328302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/functional-622932/id_rsa Username:docker}
I0203 11:23:59.069879  328302 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-622932 image ls --format yaml --alsologtostderr:
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "68973892"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-622932
size: "4788229"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:5ad6d1fbf7a41cf81658450236559fd03a80f78e6a5ed21b08e373dec4948712
repoTags:
- docker.io/library/nginx:latest
size: "201125287"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "143226622"
- id: 2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "88241478"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6
repoDigests:
- docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "99018802"
- id: f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "52333544"
- id: 5208e3a04311040b2f0e3511b8fa7da9d808223c0686ae84cf9ca7b76589a6b8
repoDigests:
- localhost/minikube-local-cache-test@sha256:37c4ad81a10e22ea9baf519be4e7aa989c8113589055c8d16c56330e1d3fbd08
repoTags:
- localhost/minikube-local-cache-test:functional-622932
size: "3330"
- id: 265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "94991840"
- id: e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "98313623"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-622932 image ls --format yaml --alsologtostderr:
I0203 11:23:58.641133  328213 out.go:345] Setting OutFile to fd 1 ...
I0203 11:23:58.641360  328213 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:23:58.641389  328213 out.go:358] Setting ErrFile to fd 2...
I0203 11:23:58.641407  328213 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:23:58.641752  328213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
I0203 11:23:58.642532  328213 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 11:23:58.642717  328213 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 11:23:58.643307  328213 cli_runner.go:164] Run: docker container inspect functional-622932 --format={{.State.Status}}
I0203 11:23:58.665469  328213 ssh_runner.go:195] Run: systemctl --version
I0203 11:23:58.665524  328213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622932
I0203 11:23:58.685954  328213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/functional-622932/id_rsa Username:docker}
I0203 11:23:58.777868  328213 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-622932 ssh pgrep buildkitd: exit status 1 (315.71176ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image build -t localhost/my-image:functional-622932 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-622932 image build -t localhost/my-image:functional-622932 testdata/build --alsologtostderr: (3.237473821s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-arm64 -p functional-622932 image build -t localhost/my-image:functional-622932 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f51b074163d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-622932
--> d2ef9db63a6
Successfully tagged localhost/my-image:functional-622932
d2ef9db63a641c88c5e12f10cf252010d3ee8f0aa608b696581d2307bf0c7cc8
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-622932 image build -t localhost/my-image:functional-622932 testdata/build --alsologtostderr:
I0203 11:23:58.961097  328307 out.go:345] Setting OutFile to fd 1 ...
I0203 11:23:58.962854  328307 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:23:58.962911  328307 out.go:358] Setting ErrFile to fd 2...
I0203 11:23:58.962934  328307 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:23:58.963273  328307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
I0203 11:23:58.964076  328307 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 11:23:58.964812  328307 config.go:182] Loaded profile config "functional-622932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 11:23:58.965356  328307 cli_runner.go:164] Run: docker container inspect functional-622932 --format={{.State.Status}}
I0203 11:23:58.992060  328307 ssh_runner.go:195] Run: systemctl --version
I0203 11:23:58.992132  328307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622932
I0203 11:23:59.028001  328307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/functional-622932/id_rsa Username:docker}
I0203 11:23:59.123364  328307 build_images.go:161] Building image from path: /tmp/build.298472235.tar
I0203 11:23:59.123434  328307 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0203 11:23:59.133276  328307 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.298472235.tar
I0203 11:23:59.138168  328307 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.298472235.tar: stat -c "%s %y" /var/lib/minikube/build/build.298472235.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.298472235.tar': No such file or directory
I0203 11:23:59.138202  328307 ssh_runner.go:362] scp /tmp/build.298472235.tar --> /var/lib/minikube/build/build.298472235.tar (3072 bytes)
I0203 11:23:59.167531  328307 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.298472235
I0203 11:23:59.176976  328307 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.298472235 -xf /var/lib/minikube/build/build.298472235.tar
I0203 11:23:59.186522  328307 crio.go:315] Building image: /var/lib/minikube/build/build.298472235
I0203 11:23:59.186597  328307 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-622932 /var/lib/minikube/build/build.298472235 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0203 11:24:02.096518  328307 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-622932 /var/lib/minikube/build/build.298472235 --cgroup-manager=cgroupfs: (2.909854647s)
I0203 11:24:02.096703  328307 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.298472235
I0203 11:24:02.106825  328307 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.298472235.tar
I0203 11:24:02.116251  328307 build_images.go:217] Built localhost/my-image:functional-622932 from /tmp/build.298472235.tar
I0203 11:24:02.116287  328307 build_images.go:133] succeeded building to: functional-622932
I0203 11:24:02.116294  328307 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.80s)
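For reference, the STEP 1/3 through 3/3 lines in the build log above correspond to a minimal Containerfile in `testdata/build` along these lines (reconstructed from the build output alone; the actual file in the minikube source tree may differ in comments or formatting):

```dockerfile
# Reconstructed from the STEP lines in the podman build log above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```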
TestFunctional/parallel/ImageCommands/Setup (0.76s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-622932
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.76s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image load --daemon kicbase/echo-server:functional-622932 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-arm64 -p functional-622932 image load --daemon kicbase/echo-server:functional-622932 --alsologtostderr: (2.106360485s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.43s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image load --daemon kicbase/echo-server:functional-622932 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.00s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-622932
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image load --daemon kicbase/echo-server:functional-622932 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image save kicbase/echo-server:functional-622932 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image rm kicbase/echo-server:functional-622932 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
2025/02/03 11:23:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-622932
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 image save --daemon kicbase/echo-server:functional-622932 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-622932
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-622932 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-622932
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-622932
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-622932
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (181.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-395034 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0203 11:24:09.228718  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:26:25.365769  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:26:53.071109  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-395034 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m0.514586838s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (181.38s)

TestMultiControlPlane/serial/DeployApp (8.83s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-395034 -- rollout status deployment/busybox: (5.68743892s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-fsl9v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-hk5b7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-rcjfl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-fsl9v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-hk5b7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-rcjfl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-fsl9v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-hk5b7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-rcjfl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.83s)

TestMultiControlPlane/serial/PingHostFromPods (1.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-fsl9v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-fsl9v -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-hk5b7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-hk5b7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-rcjfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-395034 -- exec busybox-58667487b6-rcjfl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
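Note on the pipeline above: `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` picks line 5 of the nslookup output and takes its third space-delimited field (the host IP, which the follow-up `ping -c 1 192.168.49.1` then targets). A minimal Python sketch of that extraction; the sample transcript below is an assumed BusyBox-style nslookup output, not captured from this run:

```python
# Assumed BusyBox-style nslookup transcript (illustrative, not from this run).
SAMPLE = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal
"""

def awk_nr_cut(text: str, nr: int = 5, field: int = 3) -> str:
    """Mimic `awk 'NR==nr' | cut -d' ' -f<field>`: 1-based line number,
    fields split on single spaces (cut keeps empty fields, so plain split(" "))."""
    line = text.splitlines()[nr - 1]
    return line.split(" ")[field - 1]

print(awk_nr_cut(SAMPLE))  # 192.168.49.1
```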

TestMultiControlPlane/serial/AddWorkerNode (37.96s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-395034 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-395034 -v=7 --alsologtostderr: (36.966472752s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (37.96s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-395034 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.035616967s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

TestMultiControlPlane/serial/CopyFile (19.47s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp testdata/cp-test.txt ha-395034:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile63027208/001/cp-test_ha-395034.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034:/home/docker/cp-test.txt ha-395034-m02:/home/docker/cp-test_ha-395034_ha-395034-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m02 "sudo cat /home/docker/cp-test_ha-395034_ha-395034-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034:/home/docker/cp-test.txt ha-395034-m03:/home/docker/cp-test_ha-395034_ha-395034-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m03 "sudo cat /home/docker/cp-test_ha-395034_ha-395034-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034:/home/docker/cp-test.txt ha-395034-m04:/home/docker/cp-test_ha-395034_ha-395034-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m04 "sudo cat /home/docker/cp-test_ha-395034_ha-395034-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp testdata/cp-test.txt ha-395034-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile63027208/001/cp-test_ha-395034-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m02:/home/docker/cp-test.txt ha-395034:/home/docker/cp-test_ha-395034-m02_ha-395034.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034 "sudo cat /home/docker/cp-test_ha-395034-m02_ha-395034.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m02:/home/docker/cp-test.txt ha-395034-m03:/home/docker/cp-test_ha-395034-m02_ha-395034-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m03 "sudo cat /home/docker/cp-test_ha-395034-m02_ha-395034-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m02:/home/docker/cp-test.txt ha-395034-m04:/home/docker/cp-test_ha-395034-m02_ha-395034-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m04 "sudo cat /home/docker/cp-test_ha-395034-m02_ha-395034-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp testdata/cp-test.txt ha-395034-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile63027208/001/cp-test_ha-395034-m03.txt
E0203 11:28:07.522909  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:28:07.529208  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m03 "sudo cat /home/docker/cp-test.txt"
E0203 11:28:07.541439  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:28:07.564789  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:28:07.606112  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:28:07.687459  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m03:/home/docker/cp-test.txt ha-395034:/home/docker/cp-test_ha-395034-m03_ha-395034.txt
E0203 11:28:07.849104  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:28:08.170753  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034 "sudo cat /home/docker/cp-test_ha-395034-m03_ha-395034.txt"
E0203 11:28:08.813438  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m03:/home/docker/cp-test.txt ha-395034-m02:/home/docker/cp-test_ha-395034-m03_ha-395034-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m02 "sudo cat /home/docker/cp-test_ha-395034-m03_ha-395034-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m03:/home/docker/cp-test.txt ha-395034-m04:/home/docker/cp-test_ha-395034-m03_ha-395034-m04.txt
E0203 11:28:10.095305  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m04 "sudo cat /home/docker/cp-test_ha-395034-m03_ha-395034-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp testdata/cp-test.txt ha-395034-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile63027208/001/cp-test_ha-395034-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m04:/home/docker/cp-test.txt ha-395034:/home/docker/cp-test_ha-395034-m04_ha-395034.txt
E0203 11:28:12.657649  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034 "sudo cat /home/docker/cp-test_ha-395034-m04_ha-395034.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m04:/home/docker/cp-test.txt ha-395034-m02:/home/docker/cp-test_ha-395034-m04_ha-395034-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m02 "sudo cat /home/docker/cp-test_ha-395034-m04_ha-395034-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 cp ha-395034-m04:/home/docker/cp-test.txt ha-395034-m03:/home/docker/cp-test_ha-395034-m04_ha-395034-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 ssh -n ha-395034-m03 "sudo cat /home/docker/cp-test_ha-395034-m04_ha-395034-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.47s)
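The CopyFile block above walks the full ordered cross-product of nodes: `cp-test.txt` is copied from each node to every other node, then read back over `minikube ssh -n`. A sketch of that filename scheme, with the node list taken from the log:

```python
# Node names as they appear in the CopyFile log above.
nodes = ["ha-395034", "ha-395034-m02", "ha-395034-m03", "ha-395034-m04"]

# Ordered (src, dst) pairs, excluding copies to self.
pairs = [(s, d) for s in nodes for d in nodes if s != d]

# Target path pattern used by the test: cp-test_<src>_<dst>.txt
targets = {f"/home/docker/cp-test_{s}_{d}.txt" for s, d in pairs}

print(len(pairs))  # 12 ordered pairs, matching the cp/ssh checks above
```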

TestMultiControlPlane/serial/StopSecondaryNode (12.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 node stop m02 -v=7 --alsologtostderr
E0203 11:28:17.779336  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-395034 node stop m02 -v=7 --alsologtostderr: (11.919209491s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr
E0203 11:28:28.021051  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr: exit status 7 (758.556847ms)

-- stdout --
	ha-395034
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-395034-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-395034-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-395034-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0203 11:28:27.647833  344380 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:28:27.648006  344380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:28:27.648020  344380 out.go:358] Setting ErrFile to fd 2...
	I0203 11:28:27.648026  344380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:28:27.648315  344380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
	I0203 11:28:27.648530  344380 out.go:352] Setting JSON to false
	I0203 11:28:27.648687  344380 notify.go:220] Checking for updates...
	I0203 11:28:27.649608  344380 mustload.go:65] Loading cluster: ha-395034
	I0203 11:28:27.650088  344380 config.go:182] Loaded profile config "ha-395034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:28:27.650106  344380 status.go:174] checking status of ha-395034 ...
	I0203 11:28:27.650917  344380 cli_runner.go:164] Run: docker container inspect ha-395034 --format={{.State.Status}}
	I0203 11:28:27.672255  344380 status.go:371] ha-395034 host status = "Running" (err=<nil>)
	I0203 11:28:27.672283  344380 host.go:66] Checking if "ha-395034" exists ...
	I0203 11:28:27.672681  344380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-395034
	I0203 11:28:27.703131  344380 host.go:66] Checking if "ha-395034" exists ...
	I0203 11:28:27.703459  344380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:28:27.703518  344380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-395034
	I0203 11:28:27.723399  344380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/ha-395034/id_rsa Username:docker}
	I0203 11:28:27.810559  344380 ssh_runner.go:195] Run: systemctl --version
	I0203 11:28:27.815693  344380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:28:27.832804  344380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:28:27.928841  344380 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:73 SystemTime:2025-02-03 11:28:27.919453991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:28:27.929471  344380 kubeconfig.go:125] found "ha-395034" server: "https://192.168.49.254:8443"
	I0203 11:28:27.929514  344380 api_server.go:166] Checking apiserver status ...
	I0203 11:28:27.929563  344380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:28:27.941964  344380 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup
	I0203 11:28:27.952687  344380 api_server.go:182] apiserver freezer: "10:freezer:/docker/017e95c7c08d770a0868ca70961d3822d2a4704ddf02155286f71c616a47ac46/crio/crio-b7b7ce94d6aac47264e30b63672cf378b99e52f211532c243ed3da3fc743280b"
	I0203 11:28:27.952762  344380 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/017e95c7c08d770a0868ca70961d3822d2a4704ddf02155286f71c616a47ac46/crio/crio-b7b7ce94d6aac47264e30b63672cf378b99e52f211532c243ed3da3fc743280b/freezer.state
	I0203 11:28:27.961830  344380 api_server.go:204] freezer state: "THAWED"
	I0203 11:28:27.961861  344380 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0203 11:28:27.970267  344380 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0203 11:28:27.970297  344380 status.go:463] ha-395034 apiserver status = Running (err=<nil>)
	I0203 11:28:27.970309  344380 status.go:176] ha-395034 status: &{Name:ha-395034 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:28:27.970326  344380 status.go:174] checking status of ha-395034-m02 ...
	I0203 11:28:27.970637  344380 cli_runner.go:164] Run: docker container inspect ha-395034-m02 --format={{.State.Status}}
	I0203 11:28:27.987970  344380 status.go:371] ha-395034-m02 host status = "Stopped" (err=<nil>)
	I0203 11:28:27.987998  344380 status.go:384] host is not running, skipping remaining checks
	I0203 11:28:27.988005  344380 status.go:176] ha-395034-m02 status: &{Name:ha-395034-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:28:27.988026  344380 status.go:174] checking status of ha-395034-m03 ...
	I0203 11:28:27.988353  344380 cli_runner.go:164] Run: docker container inspect ha-395034-m03 --format={{.State.Status}}
	I0203 11:28:28.011187  344380 status.go:371] ha-395034-m03 host status = "Running" (err=<nil>)
	I0203 11:28:28.011216  344380 host.go:66] Checking if "ha-395034-m03" exists ...
	I0203 11:28:28.011563  344380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-395034-m03
	I0203 11:28:28.032380  344380 host.go:66] Checking if "ha-395034-m03" exists ...
	I0203 11:28:28.032744  344380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:28:28.032788  344380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-395034-m03
	I0203 11:28:28.051255  344380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/ha-395034-m03/id_rsa Username:docker}
	I0203 11:28:28.141590  344380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:28:28.153708  344380 kubeconfig.go:125] found "ha-395034" server: "https://192.168.49.254:8443"
	I0203 11:28:28.153747  344380 api_server.go:166] Checking apiserver status ...
	I0203 11:28:28.153793  344380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:28:28.164686  344380 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1392/cgroup
	I0203 11:28:28.173834  344380 api_server.go:182] apiserver freezer: "10:freezer:/docker/d3907f7db1140eab5e19e1e7db35cb1c112ff254257ca0a824f574d4702dea55/crio/crio-3b7267ce45bd416611beaf222185a9195c97c550916afcb789eb3d089fc533f1"
	I0203 11:28:28.173911  344380 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d3907f7db1140eab5e19e1e7db35cb1c112ff254257ca0a824f574d4702dea55/crio/crio-3b7267ce45bd416611beaf222185a9195c97c550916afcb789eb3d089fc533f1/freezer.state
	I0203 11:28:28.182621  344380 api_server.go:204] freezer state: "THAWED"
	I0203 11:28:28.182652  344380 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0203 11:28:28.190951  344380 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0203 11:28:28.191039  344380 status.go:463] ha-395034-m03 apiserver status = Running (err=<nil>)
	I0203 11:28:28.191064  344380 status.go:176] ha-395034-m03 status: &{Name:ha-395034-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:28:28.191110  344380 status.go:174] checking status of ha-395034-m04 ...
	I0203 11:28:28.191453  344380 cli_runner.go:164] Run: docker container inspect ha-395034-m04 --format={{.State.Status}}
	I0203 11:28:28.211430  344380 status.go:371] ha-395034-m04 host status = "Running" (err=<nil>)
	I0203 11:28:28.211459  344380 host.go:66] Checking if "ha-395034-m04" exists ...
	I0203 11:28:28.211757  344380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-395034-m04
	I0203 11:28:28.229169  344380 host.go:66] Checking if "ha-395034-m04" exists ...
	I0203 11:28:28.229476  344380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:28:28.229525  344380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-395034-m04
	I0203 11:28:28.246352  344380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/ha-395034-m04/id_rsa Username:docker}
	I0203 11:28:28.333666  344380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:28:28.347612  344380 status.go:176] ha-395034-m04 status: &{Name:ha-395034-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.68s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

TestMultiControlPlane/serial/RestartSecondaryNode (22.88s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 node start m02 -v=7 --alsologtostderr
E0203 11:28:48.502714  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-395034 node start m02 -v=7 --alsologtostderr: (21.627042291s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr: (1.160199692s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.88s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (201.47s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-395034 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-395034 -v=7 --alsologtostderr
E0203 11:29:29.464104  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-395034 -v=7 --alsologtostderr: (36.898146551s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-395034 --wait=true -v=7 --alsologtostderr
E0203 11:30:51.385788  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:31:25.365483  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-395034 --wait=true -v=7 --alsologtostderr: (2m44.435426407s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-395034
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (201.47s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.6s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-395034 node delete m03 -v=7 --alsologtostderr: (11.681092377s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.60s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

TestMultiControlPlane/serial/StopCluster (35.75s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-395034 stop -v=7 --alsologtostderr: (35.622271559s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr: exit status 7 (125.440524ms)
-- stdout --
	ha-395034
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-395034-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-395034-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0203 11:33:03.417681  358797 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:33:03.417874  358797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:33:03.417901  358797 out.go:358] Setting ErrFile to fd 2...
	I0203 11:33:03.417919  358797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:33:03.418216  358797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
	I0203 11:33:03.418457  358797 out.go:352] Setting JSON to false
	I0203 11:33:03.418524  358797 mustload.go:65] Loading cluster: ha-395034
	I0203 11:33:03.418612  358797 notify.go:220] Checking for updates...
	I0203 11:33:03.419093  358797 config.go:182] Loaded profile config "ha-395034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:33:03.419431  358797 status.go:174] checking status of ha-395034 ...
	I0203 11:33:03.420205  358797 cli_runner.go:164] Run: docker container inspect ha-395034 --format={{.State.Status}}
	I0203 11:33:03.437828  358797 status.go:371] ha-395034 host status = "Stopped" (err=<nil>)
	I0203 11:33:03.437849  358797 status.go:384] host is not running, skipping remaining checks
	I0203 11:33:03.437856  358797 status.go:176] ha-395034 status: &{Name:ha-395034 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:33:03.437883  358797 status.go:174] checking status of ha-395034-m02 ...
	I0203 11:33:03.438188  358797 cli_runner.go:164] Run: docker container inspect ha-395034-m02 --format={{.State.Status}}
	I0203 11:33:03.455188  358797 status.go:371] ha-395034-m02 host status = "Stopped" (err=<nil>)
	I0203 11:33:03.455216  358797 status.go:384] host is not running, skipping remaining checks
	I0203 11:33:03.455223  358797 status.go:176] ha-395034-m02 status: &{Name:ha-395034-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:33:03.455247  358797 status.go:174] checking status of ha-395034-m04 ...
	I0203 11:33:03.455576  358797 cli_runner.go:164] Run: docker container inspect ha-395034-m04 --format={{.State.Status}}
	I0203 11:33:03.487443  358797 status.go:371] ha-395034-m04 host status = "Stopped" (err=<nil>)
	I0203 11:33:03.487469  358797 status.go:384] host is not running, skipping remaining checks
	I0203 11:33:03.487477  358797 status.go:176] ha-395034-m04 status: &{Name:ha-395034-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.75s)

TestMultiControlPlane/serial/RestartCluster (113.57s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-395034 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0203 11:33:07.522782  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:33:35.227967  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-395034 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m52.641791992s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (113.57s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (74.6s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-395034 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-395034 --control-plane -v=7 --alsologtostderr: (1m13.60921618s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-395034 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.60s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

TestJSONOutput/start/Command (49.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-775529 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0203 11:36:25.365004  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-775529 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (49.066019212s)
--- PASS: TestJSONOutput/start/Command (49.07s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-775529 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-775529 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-775529 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-775529 --output=json --user=testUser: (5.778733012s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-183258 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-183258 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (101.309018ms)
-- stdout --
	{"specversion":"1.0","id":"16e300c8-871e-44a4-a5d5-d38e41bf7f1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-183258] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e5fb064-ef18-4d4a-a357-5bc55696cd46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20354"}}
	{"specversion":"1.0","id":"74e5ae0b-95f6-4129-b6b3-7be70e42c63d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"390c32d8-49c4-4ac9-9951-2ab066e97e35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig"}}
	{"specversion":"1.0","id":"da5779e3-85ec-4efe-b9a6-4b76b44d7bad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube"}}
	{"specversion":"1.0","id":"58dea378-985a-49e4-86bf-1908a6e9e6f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"05aa0987-2614-4773-a45f-80f01fe9b004","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"25348f83-d8a8-48ad-bc9d-b57a736b8601","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-183258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-183258
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (40.32s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-750287 --network=
E0203 11:37:48.432522  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-750287 --network=: (38.175003292s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-750287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-750287
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-750287: (2.110728205s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.32s)

TestKicCustomNetwork/use_default_bridge_network (33.48s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-232659 --network=bridge
E0203 11:38:07.522292  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-232659 --network=bridge: (31.4744164s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-232659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-232659
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-232659: (1.976068519s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.48s)

TestKicExistingNetwork (32.18s)

=== RUN   TestKicExistingNetwork
I0203 11:38:37.098963  298903 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0203 11:38:37.114446  298903 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0203 11:38:37.116098  298903 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0203 11:38:37.116135  298903 cli_runner.go:164] Run: docker network inspect existing-network
W0203 11:38:37.132030  298903 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0203 11:38:37.132061  298903 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0203 11:38:37.132075  298903 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0203 11:38:37.132261  298903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0203 11:38:37.150018  298903 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-73395abcc82f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a0:9d:0b:5b} reservation:<nil>}
I0203 11:38:37.150395  298903 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ec1480}
I0203 11:38:37.150419  298903 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0203 11:38:37.150472  298903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0203 11:38:37.224073  298903 network_create.go:108] docker network existing-network 192.168.58.0/24 created
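The subnet walk logged above (192.168.49.0/24 skipped as taken, 192.168.58.0/24 chosen) can be sketched roughly as follows. This is a simplified illustration, not minikube's actual `pkg/network` code; the step of 9 in the third octet is an assumption inferred from the single 49 → 58 hop seen in this run.

```python
import ipaddress

def pick_free_subnet(taken, start="192.168.49.0/24", step=9, attempts=20):
    """Walk /24 candidates (49, 58, 67, ...) and return the first one
    not already in `taken`. A sketch of the behaviour visible in the log,
    not minikube's real implementation."""
    net = ipaddress.ip_network(start)
    for _ in range(attempts):
        if str(net) not in taken:
            return str(net)
        # advance the third octet by `step` (assumed from 49 -> 58 above)
        base = int(net.network_address) + step * 256
        net = ipaddress.ip_network((base, 24))
    raise RuntimeError("no free subnet found")
```

With `{"192.168.49.0/24"}` taken, this returns `192.168.58.0/24`, matching the subnet the test then hands to `docker network create`.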
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-099065 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-099065 --network=existing-network: (29.999275711s)
helpers_test.go:175: Cleaning up "existing-network-099065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-099065
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-099065: (2.023190848s)
I0203 11:39:09.263483  298903 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.18s)

                                                
                                    
TestKicCustomSubnet (31.65s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-674675 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-674675 --subnet=192.168.60.0/24: (29.523957687s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-674675 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-674675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-674675
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-674675: (2.11114583s)
--- PASS: TestKicCustomSubnet (31.65s)

                                                
                                    
TestKicStaticIP (38.07s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-457054 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-457054 --static-ip=192.168.200.200: (35.760342746s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-457054 ip
helpers_test.go:175: Cleaning up "static-ip-457054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-457054
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-457054: (2.154147508s)
--- PASS: TestKicStaticIP (38.07s)
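A static IP like the 192.168.200.200 used above has to be a usable host address in its subnet. A hypothetical pre-flight check of that invariant (this mirrors what `--static-ip` must guarantee, not minikube's actual validation code; the gateway-at-.1 convention is an assumption):

```python
import ipaddress

def validate_static_ip(ip_str):
    """Return the /24 implied by a requested static IP, rejecting
    addresses that collide with the network, broadcast, or assumed
    .1 gateway address. Illustrative sketch only."""
    ip = ipaddress.ip_address(ip_str)
    net = ipaddress.ip_network(f"{ip_str}/24", strict=False)
    if ip in (net.network_address, net.broadcast_address, net.network_address + 1):
        raise ValueError(f"{ip_str} is reserved in {net}")
    return str(net)
```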

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (68.2s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-554510 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-554510 --driver=docker  --container-runtime=crio: (30.248935131s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-557333 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-557333 --driver=docker  --container-runtime=crio: (32.274294019s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-554510
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-557333
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-557333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-557333
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-557333: (2.030249093s)
helpers_test.go:175: Cleaning up "first-554510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-554510
E0203 11:41:25.365232  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-554510: (2.235701156s)
--- PASS: TestMinikubeProfile (68.20s)
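The `profile list -ojson` calls above emit machine-readable profile data. A small sketch of consuming it, assuming the top-level `{"valid": [...], "invalid": [...]}` shape with a `"Name"` key per profile that current minikube emits; treat the shape as an assumption rather than a stable contract.

```python
import json

def profile_names(raw):
    """Extract profile names from `minikube profile list -o json` output.
    Assumes a {"valid": [...], "invalid": [...]} top level, each entry
    carrying a "Name" field."""
    data = json.loads(raw)
    return [p["Name"] for p in data.get("valid", [])]
```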

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.41s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-383811 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-383811 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.412433308s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.41s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-383811 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.3s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-385567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-385567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.295591414s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.30s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-385567 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-383811 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-383811 --alsologtostderr -v=5: (1.616672446s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-385567 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-385567
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-385567: (1.20413191s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.89s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-385567
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-385567: (6.887648898s)
--- PASS: TestMountStart/serial/RestartStopped (7.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-385567 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (77.26s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-852659 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0203 11:43:07.521912  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-852659 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m16.75341981s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.26s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.24s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-852659 -- rollout status deployment/busybox: (4.135696957s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- exec busybox-58667487b6-qbkk2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- exec busybox-58667487b6-rh26t -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- exec busybox-58667487b6-qbkk2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- exec busybox-58667487b6-rh26t -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- exec busybox-58667487b6-qbkk2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- exec busybox-58667487b6-rh26t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.24s)
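The DNS checks above fan out over every (pod, name) pair: each busybox pod must resolve each service name. A sketch of building that command matrix (plain `kubectl` with a context flag is used here for brevity, where the test shells out through the minikube binary; pod names are examples):

```python
def dns_check_commands(context, pods, names):
    """Build the kubectl-exec nslookup matrix run by the test above:
    every pod resolves every name. Returns command strings only."""
    return [
        f"kubectl --context {context} exec {pod} -- nslookup {name}"
        for name in names
        for pod in pods
    ]
```

Two pods and two names yield four commands, matching the pairwise `nslookup` runs in the log.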

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.04s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- exec busybox-58667487b6-qbkk2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- exec busybox-58667487b6-qbkk2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- exec busybox-58667487b6-rh26t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852659 -- exec busybox-58667487b6-rh26t -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)
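The pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) extracts the host IP as the third space-delimited field of line 5. The same extraction in Python, with the sample output assuming busybox nslookup's `Address 1: <ip> <name>` layout (illustrative, not captured from this run):

```python
def host_ip_from_nslookup(output):
    """Replicate `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
    nslookup output and return its third single-space-delimited field."""
    lines = output.splitlines()
    fields = lines[4].split(" ")  # NR==5 -> index 4; cut splits on single spaces
    return fields[2]              # -f3 -> index 2
```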

                                                
                                    
TestMultiNode/serial/AddNode (29.48s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-852659 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-852659 -v 3 --alsologtostderr: (28.811533457s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.48s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-852659 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.67s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.83s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp testdata/cp-test.txt multinode-852659:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp multinode-852659:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile481321984/001/cp-test_multinode-852659.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp multinode-852659:/home/docker/cp-test.txt multinode-852659-m02:/home/docker/cp-test_multinode-852659_multinode-852659-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m02 "sudo cat /home/docker/cp-test_multinode-852659_multinode-852659-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp multinode-852659:/home/docker/cp-test.txt multinode-852659-m03:/home/docker/cp-test_multinode-852659_multinode-852659-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m03 "sudo cat /home/docker/cp-test_multinode-852659_multinode-852659-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp testdata/cp-test.txt multinode-852659-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp multinode-852659-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile481321984/001/cp-test_multinode-852659-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp multinode-852659-m02:/home/docker/cp-test.txt multinode-852659:/home/docker/cp-test_multinode-852659-m02_multinode-852659.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659 "sudo cat /home/docker/cp-test_multinode-852659-m02_multinode-852659.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp multinode-852659-m02:/home/docker/cp-test.txt multinode-852659-m03:/home/docker/cp-test_multinode-852659-m02_multinode-852659-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m03 "sudo cat /home/docker/cp-test_multinode-852659-m02_multinode-852659-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp testdata/cp-test.txt multinode-852659-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp multinode-852659-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile481321984/001/cp-test_multinode-852659-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp multinode-852659-m03:/home/docker/cp-test.txt multinode-852659:/home/docker/cp-test_multinode-852659-m03_multinode-852659.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659 "sudo cat /home/docker/cp-test_multinode-852659-m03_multinode-852659.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 cp multinode-852659-m03:/home/docker/cp-test.txt multinode-852659-m02:/home/docker/cp-test_multinode-852659-m03_multinode-852659-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 ssh -n multinode-852659-m02 "sudo cat /home/docker/cp-test_multinode-852659-m03_multinode-852659-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.83s)
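The copy test above runs a fan-out: for each node, push the testdata file in, then copy it to every other node (each copy verified with `ssh ... sudo cat`). A sketch generating that command matrix (the pull-back-to-`/tmp` step is omitted; a sketch of the fan-out, not the test's real helper):

```python
def cp_test_matrix(profile, nodes):
    """Generate the minikube cp fan-out exercised above: push the file to
    each node, then cross-copy it to every other node."""
    cmds = []
    for src in nodes:
        cmds.append(f"minikube -p {profile} cp testdata/cp-test.txt {src}:/home/docker/cp-test.txt")
        for dst in nodes:
            if dst != src:
                cmds.append(
                    f"minikube -p {profile} cp {src}:/home/docker/cp-test.txt "
                    f"{dst}:/home/docker/cp-test_{src}_{dst}.txt"
                )
    return cmds
```

For three nodes this yields 3 pushes plus 6 cross-copies, matching the nine `cp` invocations in the log.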

                                                
                                    
TestMultiNode/serial/StopNode (2.5s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-852659 node stop m03: (1.394198551s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-852659 status: exit status 7 (594.412449ms)

-- stdout --
	multinode-852659
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-852659-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-852659-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-852659 status --alsologtostderr: exit status 7 (515.206394ms)

-- stdout --
	multinode-852659
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-852659-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-852659-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0203 11:44:01.144628  412413 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:44:01.144751  412413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:44:01.144761  412413 out.go:358] Setting ErrFile to fd 2...
	I0203 11:44:01.144767  412413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:44:01.145039  412413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
	I0203 11:44:01.145242  412413 out.go:352] Setting JSON to false
	I0203 11:44:01.145298  412413 mustload.go:65] Loading cluster: multinode-852659
	I0203 11:44:01.145365  412413 notify.go:220] Checking for updates...
	I0203 11:44:01.145743  412413 config.go:182] Loaded profile config "multinode-852659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:44:01.145766  412413 status.go:174] checking status of multinode-852659 ...
	I0203 11:44:01.146683  412413 cli_runner.go:164] Run: docker container inspect multinode-852659 --format={{.State.Status}}
	I0203 11:44:01.166302  412413 status.go:371] multinode-852659 host status = "Running" (err=<nil>)
	I0203 11:44:01.166340  412413 host.go:66] Checking if "multinode-852659" exists ...
	I0203 11:44:01.166696  412413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-852659
	I0203 11:44:01.191441  412413 host.go:66] Checking if "multinode-852659" exists ...
	I0203 11:44:01.191769  412413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:44:01.191830  412413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-852659
	I0203 11:44:01.215567  412413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/multinode-852659/id_rsa Username:docker}
	I0203 11:44:01.306344  412413 ssh_runner.go:195] Run: systemctl --version
	I0203 11:44:01.310979  412413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:44:01.323217  412413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:44:01.381570  412413 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2025-02-03 11:44:01.371343849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:44:01.382200  412413 kubeconfig.go:125] found "multinode-852659" server: "https://192.168.67.2:8443"
	I0203 11:44:01.382240  412413 api_server.go:166] Checking apiserver status ...
	I0203 11:44:01.382292  412413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:44:01.393828  412413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup
	I0203 11:44:01.403818  412413 api_server.go:182] apiserver freezer: "10:freezer:/docker/14af25248052bd5747310072f9dc834326288ff7f786554e017b4a2d57ebcaa1/crio/crio-f1378014c92c9bf68e850a5423851c831908a297910b75ba287380157c1d7b33"
	I0203 11:44:01.403894  412413 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/14af25248052bd5747310072f9dc834326288ff7f786554e017b4a2d57ebcaa1/crio/crio-f1378014c92c9bf68e850a5423851c831908a297910b75ba287380157c1d7b33/freezer.state
	I0203 11:44:01.413393  412413 api_server.go:204] freezer state: "THAWED"
	I0203 11:44:01.413424  412413 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0203 11:44:01.421643  412413 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0203 11:44:01.421674  412413 status.go:463] multinode-852659 apiserver status = Running (err=<nil>)
	I0203 11:44:01.421684  412413 status.go:176] multinode-852659 status: &{Name:multinode-852659 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:44:01.421702  412413 status.go:174] checking status of multinode-852659-m02 ...
	I0203 11:44:01.422017  412413 cli_runner.go:164] Run: docker container inspect multinode-852659-m02 --format={{.State.Status}}
	I0203 11:44:01.439584  412413 status.go:371] multinode-852659-m02 host status = "Running" (err=<nil>)
	I0203 11:44:01.439612  412413 host.go:66] Checking if "multinode-852659-m02" exists ...
	I0203 11:44:01.440043  412413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-852659-m02
	I0203 11:44:01.458437  412413 host.go:66] Checking if "multinode-852659-m02" exists ...
	I0203 11:44:01.458777  412413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:44:01.458826  412413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-852659-m02
	I0203 11:44:01.477884  412413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/20354-293520/.minikube/machines/multinode-852659-m02/id_rsa Username:docker}
	I0203 11:44:01.566001  412413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:44:01.578049  412413 status.go:176] multinode-852659-m02 status: &{Name:multinode-852659-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:44:01.578088  412413 status.go:174] checking status of multinode-852659-m03 ...
	I0203 11:44:01.578417  412413 cli_runner.go:164] Run: docker container inspect multinode-852659-m03 --format={{.State.Status}}
	I0203 11:44:01.596714  412413 status.go:371] multinode-852659-m03 host status = "Stopped" (err=<nil>)
	I0203 11:44:01.596738  412413 status.go:384] host is not running, skipping remaining checks
	I0203 11:44:01.596746  412413 status.go:176] multinode-852659-m03 status: &{Name:multinode-852659-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.50s)
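The apiserver check in the stderr log above resolves a freezer cgroup path from `/proc/<pid>/cgroup` (e.g. `10:freezer:/docker/<id>/crio/crio-<id>`) before reading `freezer.state`. A minimal sketch of that path derivation, assuming cgroup v1 with a mounted freezer controller:

```python
def freezer_state_path(cgroup_line):
    """Turn a /proc/<pid>/cgroup freezer entry into the freezer.state
    path read by the status check above. The line format is
    <hierarchy>:<controller>:<cgroup-path>."""
    _, controller, cgroup_path = cgroup_line.strip().split(":", 2)
    assert controller == "freezer", cgroup_line
    return f"/sys/fs/cgroup/freezer{cgroup_path}/freezer.state"
```

Reading that file yields states such as `THAWED`, as logged above.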

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.63s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-852659 node start m03 -v=7 --alsologtostderr: (8.884495785s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.63s)

TestMultiNode/serial/RestartKeepsNodes (83.11s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-852659
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-852659
E0203 11:44:30.591413  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-852659: (24.793154349s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-852659 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-852659 --wait=true -v=8 --alsologtostderr: (58.178424987s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-852659
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.11s)

TestMultiNode/serial/DeleteNode (5.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-852659 node delete m03: (4.594030276s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)

TestMultiNode/serial/StopMultiNode (23.81s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-852659 stop: (23.603366898s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-852659 status: exit status 7 (109.821136ms)

-- stdout --
	multinode-852659
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-852659-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-852659 status --alsologtostderr: exit status 7 (97.975761ms)

-- stdout --
	multinode-852659
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-852659-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0203 11:46:03.370179  419852 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:46:03.370366  419852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:46:03.370378  419852 out.go:358] Setting ErrFile to fd 2...
	I0203 11:46:03.370384  419852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:46:03.370671  419852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
	I0203 11:46:03.370913  419852 out.go:352] Setting JSON to false
	I0203 11:46:03.370978  419852 mustload.go:65] Loading cluster: multinode-852659
	I0203 11:46:03.371072  419852 notify.go:220] Checking for updates...
	I0203 11:46:03.371467  419852 config.go:182] Loaded profile config "multinode-852659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:46:03.371492  419852 status.go:174] checking status of multinode-852659 ...
	I0203 11:46:03.372294  419852 cli_runner.go:164] Run: docker container inspect multinode-852659 --format={{.State.Status}}
	I0203 11:46:03.392616  419852 status.go:371] multinode-852659 host status = "Stopped" (err=<nil>)
	I0203 11:46:03.392639  419852 status.go:384] host is not running, skipping remaining checks
	I0203 11:46:03.392646  419852 status.go:176] multinode-852659 status: &{Name:multinode-852659 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:46:03.392685  419852 status.go:174] checking status of multinode-852659-m02 ...
	I0203 11:46:03.393009  419852 cli_runner.go:164] Run: docker container inspect multinode-852659-m02 --format={{.State.Status}}
	I0203 11:46:03.417693  419852 status.go:371] multinode-852659-m02 host status = "Stopped" (err=<nil>)
	I0203 11:46:03.417718  419852 status.go:384] host is not running, skipping remaining checks
	I0203 11:46:03.417725  419852 status.go:176] multinode-852659-m02 status: &{Name:multinode-852659-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.81s)

TestMultiNode/serial/RestartMultiNode (49.88s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-852659 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0203 11:46:25.365345  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-852659 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (49.197235935s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852659 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.88s)

TestMultiNode/serial/ValidateNameConflict (35.16s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-852659
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-852659-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-852659-m02 --driver=docker  --container-runtime=crio: exit status 14 (98.436847ms)

-- stdout --
	* [multinode-852659-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-852659-m02' is duplicated with machine name 'multinode-852659-m02' in profile 'multinode-852659'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-852659-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-852659-m03 --driver=docker  --container-runtime=crio: (32.681766624s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-852659
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-852659: exit status 80 (322.585064ms)

-- stdout --
	* Adding node m03 to cluster multinode-852659 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-852659-m03 already exists in multinode-852659-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-852659-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-852659-m03: (2.008097906s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.16s)

TestPreload (127.92s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-792505 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0203 11:48:07.522266  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-792505 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.164865202s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-792505 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-792505 image pull gcr.io/k8s-minikube/busybox: (3.561926499s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-792505
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-792505: (5.780902814s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-792505 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-792505 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (19.626649997s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-792505 image list
helpers_test.go:175: Cleaning up "test-preload-792505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-792505
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-792505: (2.452605956s)
--- PASS: TestPreload (127.92s)

TestScheduledStopUnix (103.91s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-491572 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-491572 --memory=2048 --driver=docker  --container-runtime=crio: (27.55790906s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-491572 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-491572 -n scheduled-stop-491572
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-491572 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0203 11:50:08.673333  298903 retry.go:31] will retry after 145.325µs: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.674494  298903 retry.go:31] will retry after 169.077µs: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.675604  298903 retry.go:31] will retry after 306.042µs: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.676731  298903 retry.go:31] will retry after 382.45µs: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.677812  298903 retry.go:31] will retry after 699.645µs: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.678951  298903 retry.go:31] will retry after 829.72µs: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.680035  298903 retry.go:31] will retry after 863.647µs: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.681474  298903 retry.go:31] will retry after 2.27021ms: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.684733  298903 retry.go:31] will retry after 2.849607ms: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.687970  298903 retry.go:31] will retry after 5.114145ms: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.694281  298903 retry.go:31] will retry after 3.410871ms: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.698527  298903 retry.go:31] will retry after 7.06102ms: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.705698  298903 retry.go:31] will retry after 16.397297ms: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.722929  298903 retry.go:31] will retry after 17.752268ms: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.741163  298903 retry.go:31] will retry after 32.873582ms: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
I0203 11:50:08.774405  298903 retry.go:31] will retry after 30.369252ms: open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/scheduled-stop-491572/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-491572 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-491572 -n scheduled-stop-491572
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-491572
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-491572 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-491572
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-491572: exit status 7 (77.261967ms)

-- stdout --
	scheduled-stop-491572
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-491572 -n scheduled-stop-491572
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-491572 -n scheduled-stop-491572: exit status 7 (68.845082ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-491572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-491572
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-491572: (4.763161364s)
--- PASS: TestScheduledStopUnix (103.91s)

TestInsufficientStorage (11.16s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-805117 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E0203 11:51:25.365417  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-805117 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.630901671s)

-- stdout --
	{"specversion":"1.0","id":"3da4eea6-133b-429e-946b-44a94b71ae30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-805117] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c22bfb41-ca43-4c26-9c86-2a7ac8138dd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20354"}}
	{"specversion":"1.0","id":"bef6c268-35f8-4dcd-9a24-fc6c4f5be66f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"97ab25b4-2f6e-44ff-89a2-d52058367d20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig"}}
	{"specversion":"1.0","id":"ec6e78dc-cd6f-4bef-b742-9e58e9b75009","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube"}}
	{"specversion":"1.0","id":"76c575b3-7a0a-4188-acb5-b9ef27ba2197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"504e575b-de3a-44b2-b1a1-185f3164fe7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2b114d24-a475-45c1-af6a-9165246594d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"552ff41b-7c00-43ae-a5e7-d3492ce74669","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7bca448c-f35c-4ed4-ad77-ca734f5398aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f81053de-b102-44b9-949b-9074e4d63831","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4e08be23-73cd-4449-9561-4b745fe76b65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-805117\" primary control-plane node in \"insufficient-storage-805117\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1950bf71-316b-47ec-a59f-bcd39993611b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"14895970-10f1-4458-91fa-77933644cf9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc4c0d7e-6e5f-49a7-a6f2-d57f39ecfe3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-805117 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-805117 --output=json --layout=cluster: exit status 7 (301.776185ms)

-- stdout --
	{"Name":"insufficient-storage-805117","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-805117","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0203 11:51:33.421874  437642 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-805117" does not appear in /home/jenkins/minikube-integration/20354-293520/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-805117 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-805117 --output=json --layout=cluster: exit status 7 (290.619003ms)

-- stdout --
	{"Name":"insufficient-storage-805117","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-805117","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0203 11:51:33.715529  437704 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-805117" does not appear in /home/jenkins/minikube-integration/20354-293520/kubeconfig
	E0203 11:51:33.725898  437704 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/insufficient-storage-805117/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-805117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-805117
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-805117: (1.934335758s)
--- PASS: TestInsufficientStorage (11.16s)

TestRunningBinaryUpgrade (82.81s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3142124923 start -p running-upgrade-357657 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3142124923 start -p running-upgrade-357657 --memory=2200 --vm-driver=docker  --container-runtime=crio: (42.68130196s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-357657 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-357657 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.452865814s)
helpers_test.go:175: Cleaning up "running-upgrade-357657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-357657
E0203 11:56:25.365431  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-357657: (3.080156103s)
--- PASS: TestRunningBinaryUpgrade (82.81s)

TestKubernetesUpgrade (138.52s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-846184 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-846184 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m10.740948701s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-846184
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-846184: (2.362334076s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-846184 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-846184 status --format={{.Host}}: exit status 7 (115.343018ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-846184 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-846184 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.5847229s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-846184 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-846184 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-846184 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (268.543646ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-846184] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-846184
	    minikube start -p kubernetes-upgrade-846184 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8461842 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-846184 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
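Exit status 106 above is minikube's dedicated K8S_DOWNGRADE_UNSUPPORTED code, distinct from generic usage errors. A minimal sketch of how a wrapper script might branch on it — the handler function is illustrative, not part of the test suite:

```shell
#!/bin/sh
# Classify minikube's exit status the way the test expects:
# 0 = started, 106 = downgrade refused (K8S_DOWNGRADE_UNSUPPORTED).
classify_start_status() {
  case "$1" in
    0)   echo "started" ;;
    106) echo "downgrade refused" ;;
    *)   echo "unexpected failure ($1)" ;;
  esac
}

classify_start_status 106   # the status seen in the log above
```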
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-846184 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0203 11:54:28.434694  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-846184 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.406661412s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-846184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-846184
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-846184: (2.870864321s)
--- PASS: TestKubernetesUpgrade (138.52s)

                                                
                                    
TestMissingContainerUpgrade (164.99s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.446717052 start -p missing-upgrade-423166 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.446717052 start -p missing-upgrade-423166 --memory=2200 --driver=docker  --container-runtime=crio: (1m23.446267254s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-423166
E0203 11:53:07.522260  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-423166: (10.476861832s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-423166
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-423166 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-423166 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.962758708s)
helpers_test.go:175: Cleaning up "missing-upgrade-423166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-423166
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-423166: (2.458662s)
--- PASS: TestMissingContainerUpgrade (164.99s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-817995 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-817995 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (109.766292ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-817995] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
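The exit status 14 here is minikube's generic MK_USAGE code for invalid flag combinations. A hedged re-rendering of the same conflict check in plain shell — the function name and message are illustrative, not minikube's actual code:

```shell
#!/bin/sh
# Reject --kubernetes-version when --no-kubernetes is set, mirroring the
# MK_USAGE failure (exit status 14) shown in the log above.
check_flags() {
  no_kubernetes="$1"; kubernetes_version="$2"
  if [ "$no_kubernetes" = "true" ] && [ -n "$kubernetes_version" ]; then
    echo "cannot specify --kubernetes-version with --no-kubernetes"
    return 14
  fi
  echo "flags ok"
}

check_flags true 1.20 || echo "rejected with status $?"
```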
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-817995 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-817995 --driver=docker  --container-runtime=crio: (39.273303751s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-817995 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.63s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-817995 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-817995 --no-kubernetes --driver=docker  --container-runtime=crio: (5.110574078s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-817995 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-817995 status -o json: exit status 2 (397.457528ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-817995","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
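The status JSON above can be checked without jq; a minimal POSIX sketch that pulls out the `Kubelet` field (the JSON literal is copied from the log, and this assumes the flat single-line format shown):

```shell
#!/bin/sh
# Extract the "Kubelet" field from `minikube status -o json` output using
# only sed; the JSON below is taken verbatim from the log above.
status_json='{"Name":"NoKubernetes-817995","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
kubelet=$(printf '%s' "$status_json" | sed -n 's/.*"Kubelet":"\([^"]*\)".*/\1/p')
echo "$kubelet"   # → Stopped
```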
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-817995
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-817995: (2.152625076s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.66s)

                                                
                                    
TestNoKubernetes/serial/Start (9.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-817995 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-817995 --no-kubernetes --driver=docker  --container-runtime=crio: (9.172376615s)
--- PASS: TestNoKubernetes/serial/Start (9.17s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-817995 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-817995 "sudo systemctl is-active --quiet service kubelet": exit status 1 (358.228109ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
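The `exit status 1` from `minikube ssh` wraps systemd's own result: `systemctl is-active` exits 0 only for an active unit and uses LSB code 3 for an inactive/dead one, which is what "Process exited with status 3" reflects. A small sketch of that mapping (the helper name is illustrative):

```shell
#!/bin/sh
# Interpret `systemctl is-active --quiet <unit>` exit codes:
# 0 = active; 3 = inactive or dead (the case in the log above).
is_active_label() {
  case "$1" in
    0) echo "active" ;;
    3) echo "inactive" ;;
    *) echo "unknown" ;;
  esac
}

is_active_label 3
```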
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.14s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-817995
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-817995: (1.290252388s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-817995 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-817995 --driver=docker  --container-runtime=crio: (7.322644868s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.32s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-817995 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-817995 "sudo systemctl is-active --quiet service kubelet": exit status 1 (310.08142ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (85.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1451518084 start -p stopped-upgrade-889846 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1451518084 start -p stopped-upgrade-889846 --memory=2200 --vm-driver=docker  --container-runtime=crio: (44.242145492s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1451518084 -p stopped-upgrade-889846 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1451518084 -p stopped-upgrade-889846 stop: (4.811500683s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-889846 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-889846 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.588129431s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (85.64s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-889846
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-889846: (1.24728194s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

                                                
                                    
TestPause/serial/Start (63.9s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-542999 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-542999 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m3.899849282s)
--- PASS: TestPause/serial/Start (63.90s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (28.6s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-542999 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-542999 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.584356397s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.60s)

                                                
                                    
TestNetworkPlugins/group/false (4.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-849264 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-849264 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (191.286319ms)

                                                
                                                
-- stdout --
	* [false-849264] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:57:12.587230  472571 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:57:12.587421  472571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:57:12.587447  472571 out.go:358] Setting ErrFile to fd 2...
	I0203 11:57:12.587466  472571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:57:12.587831  472571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-293520/.minikube/bin
	I0203 11:57:12.589026  472571 out.go:352] Setting JSON to false
	I0203 11:57:12.590338  472571 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9562,"bootTime":1738574271,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0203 11:57:12.590444  472571 start.go:139] virtualization:  
	I0203 11:57:12.594085  472571 out.go:177] * [false-849264] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0203 11:57:12.597872  472571 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:57:12.598101  472571 notify.go:220] Checking for updates...
	I0203 11:57:12.603617  472571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:57:12.606426  472571 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-293520/kubeconfig
	I0203 11:57:12.609175  472571 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-293520/.minikube
	I0203 11:57:12.611939  472571 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0203 11:57:12.614891  472571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:57:12.618433  472571 config.go:182] Loaded profile config "pause-542999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:57:12.618523  472571 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:57:12.642128  472571 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0203 11:57:12.642259  472571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 11:57:12.707423  472571 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-03 11:57:12.697874581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0203 11:57:12.707541  472571 docker.go:318] overlay module found
	I0203 11:57:12.710713  472571 out.go:177] * Using the docker driver based on user configuration
	I0203 11:57:12.713661  472571 start.go:297] selected driver: docker
	I0203 11:57:12.713686  472571 start.go:901] validating driver "docker" against <nil>
	I0203 11:57:12.713701  472571 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:57:12.717166  472571 out.go:201] 
	W0203 11:57:12.720011  472571 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0203 11:57:12.722725  472571 out.go:201] 

                                                
                                                
** /stderr **
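The MK_USAGE failure above comes from minikube's runtime/CNI validation: crio ships no built-in networking, so `--cni=false` is rejected before any cluster is created. An illustrative shell rendering of that rule — not minikube's actual implementation:

```shell
#!/bin/sh
# crio (like containerd) relies on an external CNI plugin; refuse
# --cni=false for those runtimes, as in the log above (illustrative check).
validate_cni() {
  runtime="$1"; cni="$2"
  case "$runtime" in
    crio|containerd)
      if [ "$cni" = "false" ]; then
        echo "The \"$runtime\" container runtime requires CNI"
        return 14
      fi ;;
  esac
  echo "cni ok"
}

validate_cni crio false || echo "rejected with status $?"
```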
net_test.go:88: 
----------------------- debugLogs start: false-849264 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-849264

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-849264" does not exist

>>> k8s: netcat logs:
error: context "false-849264" does not exist

>>> k8s: describe coredns deployment:
error: context "false-849264" does not exist

>>> k8s: describe coredns pods:
error: context "false-849264" does not exist

>>> k8s: coredns logs:
error: context "false-849264" does not exist

>>> k8s: describe api server pod(s):
error: context "false-849264" does not exist

>>> k8s: api server logs:
error: context "false-849264" does not exist

>>> host: /etc/cni:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: ip a s:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: ip r s:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: iptables-save:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: iptables table nat:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> k8s: describe kube-proxy daemon set:
error: context "false-849264" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-849264" does not exist

>>> k8s: kube-proxy logs:
error: context "false-849264" does not exist

>>> host: kubelet daemon status:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: kubelet daemon config:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> k8s: kubelet logs:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20354-293520/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-542999
contexts:
- context:
    cluster: pause-542999
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-542999
  name: pause-542999
current-context: pause-542999
kind: Config
preferences: {}
users:
- name: pause-542999
  user:
    client-certificate: /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/pause-542999/client.crt
    client-key: /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/pause-542999/client.key
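Every `kubectl --context false-849264` call in the debug dump above fails identically because the kubeconfig just shown defines only the `pause-542999` context. A minimal sketch of that lookup (the `context_names` helper and the trimmed kubeconfig are illustrative, not kubectl's actual parser):

```python
# Trimmed stand-in for the kubeconfig above: only "pause-542999" is defined,
# so resolving "false-849264" must fail with the error seen in the log.
KUBECONFIG = """\
apiVersion: v1
kind: Config
current-context: pause-542999
contexts:
- context:
    cluster: pause-542999
    user: pause-542999
  name: pause-542999
"""

def context_names(text: str) -> list[str]:
    """Collect the 'name:' entries inside the top-level contexts: block."""
    names, in_contexts = [], False
    for line in text.splitlines():
        if line.startswith("contexts:"):
            in_contexts = True
        elif in_contexts and not line.startswith((" ", "-")):
            in_contexts = False  # left the contexts block
        elif in_contexts and line.strip().startswith("name:"):
            names.append(line.split(":", 1)[1].strip())
    return names

names = context_names(KUBECONFIG)
print(names)  # ['pause-542999']
for wanted in ("false-849264", "pause-542999"):
    if wanted not in names:
        # Same wording kubectl prints in the debug log above.
        print(f'error: context "{wanted}" does not exist')
```

This is why every `>>> k8s:` collector section reports the same single-line error: the profile was deleted before its context was ever written.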

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-849264

>>> host: docker daemon status:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: docker daemon config:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: /etc/docker/daemon.json:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: docker system info:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: cri-docker daemon status:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: cri-docker daemon config:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: cri-dockerd version:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: containerd daemon status:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: containerd daemon config:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: /etc/containerd/config.toml:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: containerd config dump:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: crio daemon status:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: crio daemon config:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: /etc/crio:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

>>> host: crio config:
* Profile "false-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-849264"

----------------------- debugLogs end: false-849264 [took: 3.907105483s] --------------------------------
helpers_test.go:175: Cleaning up "false-849264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-849264
--- PASS: TestNetworkPlugins/group/false (4.27s)

                                                
                                    
TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-542999 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-542999 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-542999 --output=json --layout=cluster: exit status 2 (424.143279ms)

                                                
                                                
-- stdout --
	{"Name":"pause-542999","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-542999","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
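The `--output=json --layout=cluster` payload above is meant for post-processing; a short sketch, using only Python's stdlib `json` module on the payload copied verbatim from the run above, that extracts the per-component states the status test asserts on:

```python
import json

# Status payload copied verbatim from the `minikube status --output=json
# --layout=cluster` run above (HTTP-style codes: 418 Paused, 405 Stopped, 200 OK).
payload = '{"Name":"pause-542999","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-542999","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'

status = json.loads(payload)
print(status["StatusName"])  # Paused  (cluster-level state)
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(f'{node["Name"]}/{name}: {comp["StatusName"]}')
# pause-542999/apiserver: Paused
# pause-542999/kubelet: Stopped
```

The exit status 2 above is expected: `minikube status` signals a non-running cluster through its exit code while still emitting the JSON on stdout.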

                                                
                                    
TestPause/serial/Unpause (0.97s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-542999 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.97s)

                                                
                                    
TestPause/serial/PauseAgain (1.08s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-542999 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-542999 --alsologtostderr -v=5: (1.078018054s)
--- PASS: TestPause/serial/PauseAgain (1.08s)

                                                
                                    
TestPause/serial/DeletePaused (2.93s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-542999 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-542999 --alsologtostderr -v=5: (2.931972286s)
--- PASS: TestPause/serial/DeletePaused (2.93s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.36s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-542999
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-542999: exit status 1 (16.420669ms)

                                                
                                                
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-542999: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (185.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-684402 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-684402 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m5.647447427s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (185.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (63.51s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-178586 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 12:01:25.365318  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-178586 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (1m3.509390759s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-684402 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c3a4b228-843c-4afc-95ab-9bc448e705f9] Pending
helpers_test.go:344: "busybox" [c3a4b228-843c-4afc-95ab-9bc448e705f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c3a4b228-843c-4afc-95ab-9bc448e705f9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.00346011s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-684402 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.77s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-684402 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-684402 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.353882547s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-684402 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-684402 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-684402 --alsologtostderr -v=3: (12.26373451s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-684402 -n old-k8s-version-684402
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-684402 -n old-k8s-version-684402: exit status 7 (79.307136ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-684402 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (128.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-684402 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-684402 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m7.659516237s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-684402 -n old-k8s-version-684402
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (128.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-178586 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e57c7e4b-6764-4add-b11b-a3e8b27a8750] Pending
helpers_test.go:344: "busybox" [e57c7e4b-6764-4add-b11b-a3e8b27a8750] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e57c7e4b-6764-4add-b11b-a3e8b27a8750] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004105919s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-178586 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.91s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-178586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-178586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.717465137s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-178586 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.21s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-178586 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-178586 --alsologtostderr -v=3: (12.206774687s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178586 -n no-preload-178586
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178586 -n no-preload-178586: exit status 7 (90.164243ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-178586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (282.85s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-178586 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 12:03:07.522760  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-178586 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m42.461865188s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178586 -n no-preload-178586
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (282.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7g5n7" [5147f3e7-a7b3-4903-be2a-27beaad2ccc5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004326714s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7g5n7" [5147f3e7-a7b3-4903-be2a-27beaad2ccc5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003422239s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-684402 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-684402 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
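The image check above amounts to a simple filter: take the image references reported by `minikube image list --format=json` and flag anything that does not come from a registry minikube itself manages. A minimal Python sketch of that idea, using a made-up allow-list of registry prefixes for illustration (not minikube's actual logic):

```python
# Hypothetical allow-list of registries whose images minikube installs itself.
# The real test harness uses its own criteria; these prefixes are assumptions.
MINIKUBE_REGISTRIES = (
    "registry.k8s.io/",
    "docker.io/kubernetesui/",
)

def non_minikube_images(images):
    """Return image references not recognized as minikube system images."""
    return [img for img in images if not img.startswith(MINIKUBE_REGISTRIES)]

# Sample input mirroring the image list logged above.
images = [
    "registry.k8s.io/kube-apiserver:v1.32.1",
    "kindest/kindnetd:v20240202-8f1494ea",
    "kindest/kindnetd:v20241212-9f82dd49",
    "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
]
for img in non_minikube_images(images):
    print("Found non-minikube image:", img)
```

With this sample input, the three user-supplied images (the two kindnetd tags and the busybox image) are flagged, matching the "Found non-minikube image" lines in the log.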

TestStartStop/group/old-k8s-version/serial/Pause (3.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-684402 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-684402 -n old-k8s-version-684402
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-684402 -n old-k8s-version-684402: exit status 2 (324.389429ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-684402 -n old-k8s-version-684402
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-684402 -n old-k8s-version-684402: exit status 2 (324.679906ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-684402 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-684402 -n old-k8s-version-684402
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-684402 -n old-k8s-version-684402
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

TestStartStop/group/embed-certs/serial/FirstStart (49.71s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-096762 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-096762 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (49.698900874s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.71s)

TestStartStop/group/embed-certs/serial/DeployApp (10.37s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-096762 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a7b25dca-593a-4ebd-b463-1a97897c64b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a7b25dca-593a-4ebd-b463-1a97897c64b5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004781607s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-096762 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-096762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-096762 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/embed-certs/serial/Stop (11.92s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-096762 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-096762 --alsologtostderr -v=3: (11.923364507s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.92s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-096762 -n embed-certs-096762
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-096762 -n embed-certs-096762: exit status 7 (80.065468ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-096762 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (265.99s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-096762 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 12:06:25.365702  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:51.179114  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:51.185623  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:51.197132  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:51.218530  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:51.259951  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:51.341456  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:51.503047  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:51.825105  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:52.467227  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:53.749505  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:06:56.310770  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:07:01.432733  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:07:11.674657  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:07:32.156022  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-096762 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m25.628336482s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-096762 -n embed-certs-096762
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (265.99s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zhkd5" [0487cd10-277c-4c66-8ce1-51067198b4ab] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003855323s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zhkd5" [0487cd10-277c-4c66-8ce1-51067198b4ab] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00404394s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-178586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-178586 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.1s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-178586 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178586 -n no-preload-178586
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178586 -n no-preload-178586: exit status 2 (315.318964ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-178586 -n no-preload-178586
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-178586 -n no-preload-178586: exit status 2 (332.29377ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-178586 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178586 -n no-preload-178586
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-178586 -n no-preload-178586
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.95s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-835319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 12:08:07.522280  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:08:13.118585  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-835319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (49.953952312s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.95s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-835319 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1aa41241-49a5-45fb-ad2d-d381649f385f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1aa41241-49a5-45fb-ad2d-d381649f385f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00313468s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-835319 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-835319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-835319 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-835319 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-835319 --alsologtostderr -v=3: (11.947369761s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-835319 -n default-k8s-diff-port-835319
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-835319 -n default-k8s-diff-port-835319: exit status 7 (73.942158ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-835319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.05s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-835319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 12:09:35.040527  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-835319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m48.719759803s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-835319 -n default-k8s-diff-port-835319
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.05s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6j4ps" [1c642722-b74b-4c68-8705-cf4e9b40babd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004164539s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6j4ps" [1c642722-b74b-4c68-8705-cf4e9b40babd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005013262s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-096762 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-096762 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (3.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-096762 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-096762 -n embed-certs-096762
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-096762 -n embed-certs-096762: exit status 2 (321.439096ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-096762 -n embed-certs-096762
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-096762 -n embed-certs-096762: exit status 2 (320.704091ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-096762 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-096762 -n embed-certs-096762
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-096762 -n embed-certs-096762
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

TestStartStop/group/newest-cni/serial/FirstStart (35.14s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-424439 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 12:11:08.436311  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-424439 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (35.141097538s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-424439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-424439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067596209s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-424439 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-424439 --alsologtostderr -v=3: (1.286540071s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-424439 -n newest-cni-424439
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-424439 -n newest-cni-424439: exit status 7 (77.286327ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-424439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
TestStartStop/group/newest-cni/serial/SecondStart (16.85s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-424439 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 12:11:25.364866  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-424439 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (16.349269698s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-424439 -n newest-cni-424439
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.85s)
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-424439 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)
TestStartStop/group/newest-cni/serial/Pause (3.14s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-424439 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-424439 -n newest-cni-424439
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-424439 -n newest-cni-424439: exit status 2 (322.098026ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-424439 -n newest-cni-424439
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-424439 -n newest-cni-424439: exit status 2 (330.485116ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-424439 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-424439 -n newest-cni-424439
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-424439 -n newest-cni-424439
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.14s)
TestNetworkPlugins/group/auto/Start (50.6s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0203 12:11:51.178574  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:12:18.883134  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:12:28.674522  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:12:28.680836  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:12:28.692109  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:12:28.713430  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:12:28.754759  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:12:28.836364  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:12:28.997724  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:12:29.319486  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:12:29.960893  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (50.598187006s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.60s)
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-849264 "pgrep -a kubelet"
E0203 12:12:31.243118  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
I0203 12:12:31.460322  298903 config.go:182] Loaded profile config "auto-849264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)
TestNetworkPlugins/group/auto/NetCatPod (11.3s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-849264 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8ww4z" [d414503c-833d-4080-b60b-39c066615177] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0203 12:12:33.804596  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-8ww4z" [d414503c-833d-4080-b60b-39c066615177] Running
E0203 12:12:38.926694  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003895034s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)
TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-849264 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)
TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)
TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
TestNetworkPlugins/group/kindnet/Start (48.67s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0203 12:13:07.522596  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:13:09.650232  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:13:50.612313  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (48.668719609s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.67s)
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7bkfg" [24fcf0a7-400d-46b9-9b49-15710be8a64b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004369777s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lrwgp" [6edf76bd-a0a7-4252-8e72-66428c70d63b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004377297s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-849264 "pgrep -a kubelet"
I0203 12:13:59.447256  298903 config.go:182] Loaded profile config "kindnet-849264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-849264 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ghsjn" [354ddf80-e71e-4db3-a341-ad81e3e79074] Pending
helpers_test.go:344: "netcat-5d86dc444-ghsjn" [354ddf80-e71e-4db3-a341-ad81e3e79074] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ghsjn" [354ddf80-e71e-4db3-a341-ad81e3e79074] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004141704s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lrwgp" [6edf76bd-a0a7-4252-8e72-66428c70d63b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003746148s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-835319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-835319 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.41s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-835319 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-835319 -n default-k8s-diff-port-835319
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-835319 -n default-k8s-diff-port-835319: exit status 2 (426.84739ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-835319 -n default-k8s-diff-port-835319
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-835319 -n default-k8s-diff-port-835319: exit status 2 (498.297231ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-835319 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-835319 -n default-k8s-diff-port-835319
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-835319 -n default-k8s-diff-port-835319
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.41s)
E0203 12:18:44.908450  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:44.914886  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:44.926371  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:44.947866  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:44.989362  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:45.074786  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:45.236490  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:45.558478  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:46.200718  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:47.482438  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:50.044212  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:53.159591  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:53.166012  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:53.177440  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:53.198878  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:53.240247  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:53.321704  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:53.483242  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:53.672461  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:53.807833  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:54.449780  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:55.165850  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:55.731968  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:18:58.293375  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:19:03.415541  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:19:05.407982  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:19:13.657505  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/kindnet-849264/client.crt: no such file or directory" logger="UnhandledError"
TestNetworkPlugins/group/kindnet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-849264 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)
TestNetworkPlugins/group/kindnet/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)
TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)
TestNetworkPlugins/group/calico/Start (79.38s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m19.382938481s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.38s)
TestNetworkPlugins/group/custom-flannel/Start (63.74s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0203 12:15:12.534163  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.735763039s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.74s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hr5w9" [cdd27b32-ccf3-42d5-83e1-3b97e6b170f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004854137s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-849264 "pgrep -a kubelet"
I0203 12:15:39.557498  298903 config.go:182] Loaded profile config "custom-flannel-849264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-849264 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-s6mzc" [ad2851a4-6f7b-4846-bb95-cc65fcda3c80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-s6mzc" [ad2851a4-6f7b-4846-bb95-cc65fcda3c80] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003994968s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-849264 "pgrep -a kubelet"
I0203 12:15:41.962098  298903 config.go:182] Loaded profile config "calico-849264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (13.46s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-849264 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-cxzd2" [e61b2433-1ddf-480b-bb9e-2cbc9bfc4c5a] Pending
helpers_test.go:344: "netcat-5d86dc444-cxzd2" [e61b2433-1ddf-480b-bb9e-2cbc9bfc4c5a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-cxzd2" [e61b2433-1ddf-480b-bb9e-2cbc9bfc4c5a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003588953s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.46s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-849264 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-849264 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (79.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m19.489842107s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.49s)

TestNetworkPlugins/group/flannel/Start (61.61s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0203 12:16:25.365742  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/addons-595492/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:16:51.180275  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/old-k8s-version-684402/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.610063238s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.61s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-p7htz" [c996d7fe-013e-43a4-83bf-7edbdcd42458] Running
E0203 12:17:28.675265  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/no-preload-178586/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004606349s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-849264 "pgrep -a kubelet"
I0203 12:17:30.401435  298903 config.go:182] Loaded profile config "flannel-849264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-849264 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-x8gk2" [10bd104d-a6da-47af-a315-6b44e308fadd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0203 12:17:31.735603  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:17:31.741993  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:17:31.753350  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:17:31.774767  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:17:31.816232  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:17:31.897678  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:17:32.059320  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:17:32.381094  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:17:33.023104  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
E0203 12:17:34.304441  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-x8gk2" [10bd104d-a6da-47af-a315-6b44e308fadd] Running
E0203 12:17:36.865831  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/auto-849264/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004215355s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-849264 "pgrep -a kubelet"
I0203 12:17:37.199188  298903 config.go:182] Loaded profile config "enable-default-cni-849264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-849264 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-g4srr" [bde0b6d1-c258-4bb3-bf3d-e435d6223095] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-g4srr" [bde0b6d1-c258-4bb3-bf3d-e435d6223095] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.012494916s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-849264 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-849264 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (70.84s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0203 12:18:07.522559  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/functional-622932/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-849264 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m10.844781626s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.84s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-849264 "pgrep -a kubelet"
I0203 12:19:16.603971  298903 config.go:182] Loaded profile config "bridge-849264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-849264 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hjtpn" [ddd6ad7f-3ac4-4a6b-bf8f-a145a19a8d65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hjtpn" [ddd6ad7f-3ac4-4a6b-bf8f-a145a19a8d65] Running
E0203 12:19:25.889843  298903 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/default-k8s-diff-port-835319/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003477575s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-849264 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-849264 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (32/331)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0.62s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-762626 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-762626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-762626
--- SKIP: TestDownloadOnlyKic (0.62s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-595492 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-795474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-795474
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.61s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-849264 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-849264

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-849264

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-849264

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-849264

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-849264

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-849264

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-849264

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-849264

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-849264

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-849264

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: /etc/hosts:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: /etc/resolv.conf:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-849264

>>> host: crictl pods:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: crictl containers:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> k8s: describe netcat deployment:
error: context "kubenet-849264" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-849264" does not exist

>>> k8s: netcat logs:
error: context "kubenet-849264" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-849264" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-849264" does not exist

>>> k8s: coredns logs:
error: context "kubenet-849264" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-849264" does not exist

>>> k8s: api server logs:
error: context "kubenet-849264" does not exist

>>> host: /etc/cni:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: ip a s:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: ip r s:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: iptables-save:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: iptables table nat:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-849264" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-849264" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-849264" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: kubelet daemon config:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> k8s: kubelet logs:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20354-293520/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-542999
contexts:
- context:
    cluster: pause-542999
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-542999
  name: pause-542999
current-context: pause-542999
kind: Config
preferences: {}
users:
- name: pause-542999
  user:
    client-certificate: /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/pause-542999/client.crt
    client-key: /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/pause-542999/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-849264

>>> host: docker daemon status:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: docker daemon config:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: docker system info:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: cri-docker daemon status:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: cri-docker daemon config:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: cri-dockerd version:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: containerd daemon status:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: containerd daemon config:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: containerd config dump:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: crio daemon status:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: crio daemon config:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: /etc/crio:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"

>>> host: crio config:
* Profile "kubenet-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-849264"
----------------------- debugLogs end: kubenet-849264 [took: 5.445190265s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-849264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-849264
--- SKIP: TestNetworkPlugins/group/kubenet (5.61s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.19s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-849264 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-849264

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-849264

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-849264

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-849264

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-849264

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-849264

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-849264

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-849264

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-849264

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-849264

>>> host: /etc/nsswitch.conf:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: /etc/hosts:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: /etc/resolv.conf:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-849264

>>> host: crictl pods:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: crictl containers:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> k8s: describe netcat deployment:
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-849264

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-849264

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-849264

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-849264

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-849264" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20354-293520/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-542999
contexts:
- context:
    cluster: pause-542999
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-542999
  name: pause-542999
current-context: pause-542999
kind: Config
preferences: {}
users:
- name: pause-542999
  user:
    client-certificate: /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/pause-542999/client.crt
    client-key: /home/jenkins/minikube-integration/20354-293520/.minikube/profiles/pause-542999/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-849264

>>> host: docker daemon status:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: docker daemon config:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: docker system info:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: cri-docker daemon status:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: cri-docker daemon config:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: cri-dockerd version:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: containerd daemon status:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: containerd daemon config:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: containerd config dump:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: crio daemon status:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: crio daemon config:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: /etc/crio:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

>>> host: crio config:
* Profile "cilium-849264" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-849264"

----------------------- debugLogs end: cilium-849264 [took: 4.015438058s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-849264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-849264
--- SKIP: TestNetworkPlugins/group/cilium (4.19s)
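The repeated failures in the debugLogs above share a single cause: every probe targets the profile `cilium-849264`, but the kubeconfig shown under ">>> k8s: kubectl config:" contains only the `pause-542999` context, so each `kubectl --context cilium-849264` call fails with "context was not found" and each minikube call reports the profile as missing. A minimal sketch of the kind of guard a collector could use to skip those doomed probes (hypothetical; the context list is hard-coded from the dump above rather than read from a live kubeconfig):

```shell
# Contexts actually present, taken from the kubectl config dump above.
available_contexts="pause-542999"
# Profile the debug collector was asked to probe.
wanted="cilium-849264"

# Only run the k8s probes when the context exists; otherwise every
# kubectl invocation would fail with "context was not found".
if printf '%s\n' "$available_contexts" | grep -qx "$wanted"; then
  echo "context $wanted found; collecting k8s debug logs"
else
  echo "context $wanted not found; skipping k8s debug probes"
fi
```

In this run the else-branch applies, matching the wall of "context was not found" errors; since the cilium test was SKIPped and its profile deleted, the missing context is expected rather than a bug in its own right.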